@@ -22,7 +32,7 @@
**AgentVerse** offers a versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our framework empowers researchers to concentrate on their research, rather than being bogged down by implementation details.
-⚠️⚠️⚠️ We're refactoring the code, and the goal is to provide a flexibility to construct simulation(without a predefined goal) and task-solving(with a specific goal) environments. Please note that this README is outdated, we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use [`release-1.0`](https://github.com/OpenBMB/AgentVerse/tree/release-1.0) branch.
+⚠️⚠️⚠️ We're refactoring the code to provide the flexibility to construct both simulation (without a predefined goal) and task-solving (with a specific goal) environments. Please note that this README is slightly outdated; we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use the [`release-1.0`](https://github.com/OpenBMB/AgentVerse/tree/release-1.0) branch.
---
@@ -35,10 +45,12 @@
- 🛠 **Tools (Plugins) Utilization**: AgentVerse supports multi-agent environments equipped with tools. Currently, AgentVerse supports the tools provided in [BMTools](https://github.com/OpenBMB/BMTools).
## 📰 What's New
+- [2023/10/5] 💡 We release the code of our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848), and refactor our codebase to enable the creation of both simulation and task-solving environments!
+
- [2023/8/22] 📝 We're excited to share our work-in-progress paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) related to this repository.
+
-
-You could refer the stay-tuned code in this [branch](https://github.com/OpenBMB/AgentVerse/tree/AgentVerse-TaskSolving).
+
- [2023/6/5] 🎉 We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administrator](#database-administrator-dba), and a simple [H5 Pokemon Game](#pokemon) that enables interaction with the characters in Pokémon! Try out these demos and have fun!
- [2023/5/1] 🚀 [AgentVerse](https://github.com/OpenBMB/AgentVerse) is officially launched!
@@ -57,11 +69,10 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
Also, if you're passionate about advancing the frontiers of multi-agent environments and are eager to dive deeper into research, we invite you to join our team at THUNLP. To explore this exciting opportunity and embark on a collaborative journey with us, please reach out to [chenweize1998@gmail.com](mailto:chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](mailto:yushengsu.thu@gmail.com) and express your interest. We're keen to welcome motivated individuals like you to our lab!
## 🗓 Coming Soon
-- [ ] Code release of our [paper](https://arxiv.org/abs/2308.10848)
+- [x] Code release of our [paper](https://arxiv.org/abs/2308.10848)
- [ ] Add documentation
- [ ] Support more sophisticated memory for conversation history
- [ ] Add support for local LLM
-- [ ] Auto-generate UI based on the given multi-agent environment
## 👾 Simple Demo Video
@@ -79,7 +90,7 @@ In the NLP class, the professor and students engage in interactive communication
Use the following command to launch the NLP Classroom example:
```bash
-python main_simulation_gui.py --task nlp_classroom_9players
+python main_simulation_gui.py --task simulation/nlp_classroom_9players
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/6ea07850-595e-4a28-a82e-f863011353c2
@@ -90,7 +101,7 @@ A prisoner's Dilemma is a thought experiment that challenges two completely rati
Use the following command to launch the Prisoner Dilemma example:
```bash
-python main_simulation_cli.py --task prisoner_dilemma
+python main_simulation_cli.py --task simulation/prisoner_dilemma
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/017c46e5-c738-4fca-9352-b008e2d518bd
@@ -101,7 +112,7 @@ In the Software Design example, a code writer, a code tester and a code reviewer
Use the following command to launch the Software Design example:
```bash
-python main_demo.py --task sde_team/sde_team_2players
+python main_demo.py --task simulation/sde_team/sde_team_2players
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4e54661626a
@@ -112,7 +123,7 @@ https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4
In the database diagnosis scenario, the Chief DBA monitors the system for anomalies (e.g., slow queries, locks, crashes). When an anomaly is detected, domain experts are alerted to analyze root causes, share insights, and suggest optimization solutions together. The Chief DBA then provides a summarized report to the user.
```bash
-python main_simulation_gui.py --task db_diag
+python main_simulation_gui.py --task simulation/db_diag
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/c633419d-afbb-47d4-bb12-6bb512e7af3a
@@ -124,7 +135,7 @@ In the context of the text evaluation scenario, we recommend users explore the [
https://github.com/OpenBMB/AgentVerse/assets/75533759/58f33468-f15b-4bac-ae01-8d0780019f85
#### Pokemon
-In the game, agents can visit shops, train their Pokémon at the gym, and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. There are 6 characters in the Pokémon environment who appeared in Pokemon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie) and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).
+**Currently available only in [`release-1.0`](https://github.com/OpenBMB/AgentVerse/tree/release-1.0)**. In the game, agents can walk around the game world and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. There are 6 characters in the Pokémon environment who appear in Pokémon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie) and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).
To launch the Pokemon game, first launch a local server with the following command:
```bash
@@ -162,8 +173,9 @@ https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f1
- [Contents](#contents)
- [🚀 Getting Started](#-getting-started)
- [Installation](#installation)
- - [CLI Example](#cli-example)
- - [Local Website Demo](#local-website-demo)
+ - [Simulation CLI Example](#simulation-cli-example)
+ - [Simulation Local Website Demo](#simulation-local-website-demo)
+ - [Task-Solving CLI Example](#task-solving-cli-example)
- [💡 Philosophy](#-philosophy)
- [Environment](#environment)
- [Agent](#agent)
@@ -227,23 +239,33 @@ cd BMTools
python setup.py develop
-->
-### CLI Example
+### Simulation CLI Example
You can run the multi-agent environments provided by us. Take the classroom scenario as an example: there are nine agents, one playing the role of a professor and the other eight as students.
```shell
-python3 main.py --task nlp_classroom_9players
+python3 main_simulation_cli.py --task simulation/nlp_classroom_9players
```
-### Local Website Demo
+### Simulation Local Website Demo
We also provide a local website demo for this environment. You can launch it with
```shell
-python3 main_simulation_gui.py --task nlp_classroom_9players
+python3 main_simulation_gui.py --task simulation/nlp_classroom_9players
```
After successfully launching the local server, you can visit [http://127.0.0.1:7860/](http://127.0.0.1:7860/) to view the classroom environment.
+### Task-Solving CLI Example
+
+To run the experiments with the task-solving environment proposed in our [paper](https://arxiv.org/abs/2308.10848), you can use the following command:
+
+```shell
+# Run the HumanEval benchmark using gpt-3.5-turbo
+python3 main_tasksolving_cli.py --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
+```
+
+You can take a look at `agentverse/tasks/tasksolving` for the other experiments reported in our paper.
## 💡 Philosophy
@@ -374,19 +396,17 @@ Here's a brief overview of each example:
## Citation
If you find this repo helpful, feel free to cite us.
```
-@misc{chen2023agentverse,
- title={AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents},
- author={Weize Chen and Yusheng Su and Jingwei Zuo and Cheng Yang and Chenfei Yuan and Chen Qian and Chi-Min Chan and Yujia Qin and Yaxi Lu and Ruobing Xie and Zhiyuan Liu and Maosong Sun and Jie Zhou},
- year={2023},
- eprint={2308.10848},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
+@article{chen2023agentverse,
+ title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents},
+ author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others},
+ journal={arXiv preprint arXiv:2308.10848},
+ year={2023}
}
```
## Contact
-Weize Chen: chenwz21@mails.tsinghua.edu.cn
+Weize Chen: chenweize1998@gmail.com
[Yusheng Su](https://yushengsu-thu.github.io/): yushengsu.thu@gmail.com
From d359c7679ff5479812f20fe617155a74f131fad4 Mon Sep 17 00:00:00 2001
From: mlmz <54172054+minleminzui@users.noreply.github.com>
Date: Mon, 9 Oct 2023 18:18:47 +0800
Subject: [PATCH 011/101] fix: allows users to customize config.yaml of the
task (#53)
---
agentverse/agentverse.py | 4 ++--
agentverse/gui.py | 6 +++---
agentverse/initialization.py | 4 ++--
agentverse/simulation.py | 4 ++--
agentverse/tasks/simulation/sde_team/readme.md | 5 ++++-
agentverse/tasksolving.py | 4 ++--
benchmark.py | 8 +++++---
main_simulation_cli.py | 5 ++++-
main_simulation_gui.py | 5 ++++-
9 files changed, 28 insertions(+), 17 deletions(-)
diff --git a/agentverse/agentverse.py b/agentverse/agentverse.py
index a716b810c..8b1505075 100644
--- a/agentverse/agentverse.py
+++ b/agentverse/agentverse.py
@@ -23,13 +23,13 @@ def __init__(self, agents: List[BaseAgent], environment: BaseEnvironment):
self.environment = environment
@classmethod
- def from_task(cls, task: str):
+ def from_task(cls, task: str, tasks_dir: str):
"""Build an AgentVerse from a task name.
The task name should correspond to a directory in `tasks` directory.
Then this method will load the configuration from the yaml file in that directory.
"""
# Prepare the config of the task
- task_config = prepare_task_config(task)
+ task_config = prepare_task_config(task, tasks_dir)
# Build the agents
agents = []
diff --git a/agentverse/gui.py b/agentverse/gui.py
index ca8738b3d..911207f48 100644
--- a/agentverse/gui.py
+++ b/agentverse/gui.py
@@ -30,7 +30,7 @@ class GUI:
the UI of frontend
"""
- def __init__(self, task: str):
+ def __init__(self, task: str, tasks_dir: str):
"""
init a UI.
default number of students is 0
@@ -38,9 +38,9 @@ def __init__(self, task: str):
self.messages = []
self.task = task
if task == "pipeline_brainstorming":
- self.backend = TaskSolving.from_task(task)
+ self.backend = TaskSolving.from_task(task, tasks_dir)
else:
- self.backend = Simulation.from_task(task)
+ self.backend = Simulation.from_task(task, tasks_dir)
self.turns_remain = 0
self.agent_id = {
self.backend.agents[idx].name: idx
diff --git a/agentverse/initialization.py b/agentverse/initialization.py
index cc6e3cd42..ed21733e0 100644
--- a/agentverse/initialization.py
+++ b/agentverse/initialization.py
@@ -66,9 +66,9 @@ def load_agent(agent_config: Dict) -> BaseAgent:
return agent
-def prepare_task_config(task):
+def prepare_task_config(task, tasks_dir):
"""Read the yaml config of the given task in `tasks` directory."""
- all_task_dir = os.path.join(os.path.dirname(__file__), "tasks")
+ all_task_dir = tasks_dir
task_path = os.path.join(all_task_dir, task)
config_path = os.path.join(task_path, "config.yaml")
if not os.path.exists(task_path):
diff --git a/agentverse/simulation.py b/agentverse/simulation.py
index f4d3dc982..27a439e4b 100644
--- a/agentverse/simulation.py
+++ b/agentverse/simulation.py
@@ -17,13 +17,13 @@ def __init__(self, agents: List[BaseAgent], environment: BaseEnvironment):
self.environment = environment
@classmethod
- def from_task(cls, task: str):
+ def from_task(cls, task: str, tasks_dir: str):
"""Build an AgentVerse from a task name.
The task name should correspond to a directory in `tasks` directory.
Then this method will load the configuration from the yaml file in that directory.
"""
# Prepare the config of the task
- task_config = prepare_task_config(task)
+ task_config = prepare_task_config(task, tasks_dir)
# Build the agents
agents = []
diff --git a/agentverse/tasks/simulation/sde_team/readme.md b/agentverse/tasks/simulation/sde_team/readme.md
index 62faa3117..b022b820b 100644
--- a/agentverse/tasks/simulation/sde_team/readme.md
+++ b/agentverse/tasks/simulation/sde_team/readme.md
@@ -72,14 +72,17 @@ python build_config.py
After generating `config.yaml`, run the `main.py` to start the task.
```python
+import os
from agentverse.agentverse import AgentVerse
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("--task", type=str, default="sde_team/sde_team_2players")
+parser.add_argument("--tasks_dir", type=str, default=os.path.join(
+ os.path.dirname(__file__), "agentverse", "tasks"))
args = parser.parse_args()
-agentverse = AgentVerse.from_task(args.task)
+agentverse = AgentVerse.from_task(args.task, args.tasks_dir)
agentverse.run()
```
diff --git a/agentverse/tasksolving.py b/agentverse/tasksolving.py
index b90b81b0e..2bd2bd9b7 100644
--- a/agentverse/tasksolving.py
+++ b/agentverse/tasksolving.py
@@ -23,13 +23,13 @@ def __init__(self, environment: BasicEnvironment, task: str = ""):
self.task = task
@classmethod
- def from_task(cls, task: str):
+ def from_task(cls, task: str, tasks_dir: str):
"""Build an AgentVerse from a task name.
The task name should correspond to a directory in `tasks` directory.
Then this method will load the configuration from the yaml file in that directory.
"""
# Prepare the config of the task
- task_config = prepare_task_config(task)
+ task_config = prepare_task_config(task, tasks_dir)
# Build the environment
env_config = task_config["environment"]
diff --git a/benchmark.py b/benchmark.py
index 304adc6e1..01d2cb232 100644
--- a/benchmark.py
+++ b/benchmark.py
@@ -12,7 +12,9 @@
parser = ArgumentParser()
-parser.add_argument("--task", type=str, default="responsegen")
+parser.add_argument("--task", type=str, default="tasksolving/responsegen")
+parser.add_argument("--tasks_dir", type=str, default=os.path.join(
+ os.path.dirname(__file__), "agentverse", "tasks"))
parser.add_argument("--dataset_path", type=str, required=True)
parser.add_argument("--output_path", type=str, default=None)
parser.add_argument("--has_tools", action="store_true")
@@ -38,7 +40,7 @@ def get_dataloader(task, dataset_path):
else:
os.makedirs(args.output_path, exist_ok=True)
shutil.copyfile(
- f"./agentverse/tasks/{args.task}/config.yaml",
+ f"{args.tasks_dir}/{args.task}/config.yaml",
f"{args.output_path}/config.yaml",
)
@@ -57,7 +59,7 @@ def get_dataloader(task, dataset_path):
assert args.tool_tmp_path is not None
with open(args.tool_tmp_path, "w") as f:
f.write(json.dumps(example["tools"]))
- agentversepipeline = TaskSolving.from_task(args.task)
+ agentversepipeline = TaskSolving.from_task(args.task, args.tasks_dir)
agentversepipeline.environment.set_task_description(example["input"])
# print(args.single_agent)
# print(args.discussion_mode)
diff --git a/main_simulation_cli.py b/main_simulation_cli.py
index a6be9e66d..48f56e567 100644
--- a/main_simulation_cli.py
+++ b/main_simulation_cli.py
@@ -1,3 +1,4 @@
+import os
import logging
from argparse import ArgumentParser
@@ -6,10 +7,12 @@
parser = ArgumentParser()
parser.add_argument("--task", type=str, default="simulation/prisoner_dilemma")
+parser.add_argument("--tasks_dir", type=str, default=os.path.join(
+ os.path.dirname(__file__), "agentverse", "tasks"))
parser.add_argument("--debug", action="store_true")
args = parser.parse_args()
logger.set_level(logging.DEBUG if args.debug else logging.INFO)
-agentverse = Simulation.from_task(args.task)
+agentverse = Simulation.from_task(args.task, args.tasks_dir)
agentverse.run()
diff --git a/main_simulation_gui.py b/main_simulation_gui.py
index a7c4ad42f..576b980c5 100644
--- a/main_simulation_gui.py
+++ b/main_simulation_gui.py
@@ -1,9 +1,12 @@
+import os
from agentverse.gui import GUI
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("--task", type=str, default="simulation/nlp_classroom_9players")
+parser.add_argument("--tasks_dir", type=str, default=os.path.join(
+ os.path.dirname(__file__), "agentverse", "tasks"))
args = parser.parse_args()
-ui = GUI(args.task)
+ui = GUI(args.task, args.tasks_dir)
ui.launch()
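Taken together, the entry-point changes in this patch converge on a single CLI pattern: every launcher accepts a task name relative to a tasks directory, plus an overridable `--tasks_dir`. A minimal self-contained sketch of that argument handling (defaults taken from the diffs above; the repo anchors the default to the script's own directory via `os.path.dirname(__file__)`, which a plain relative path stands in for here):

```python
import os
from argparse import ArgumentParser

# The shared CLI pattern after this patch: --task names a directory relative
# to --tasks_dir, which defaults to the bundled agentverse/tasks.
parser = ArgumentParser()
parser.add_argument("--task", type=str,
                    default="simulation/nlp_classroom_9players")
parser.add_argument("--tasks_dir", type=str,
                    default=os.path.join("agentverse", "tasks"))
args = parser.parse_args([])  # empty argv: just materialize the defaults

print(args.task, args.tasks_dir)
```

Passing `--tasks_dir /path/to/my_tasks --task my_scenario` would then load `/path/to/my_tasks/my_scenario/config.yaml`, which is the customization the patch enables.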
From ff67cff5073a577e775afbee581941be96c0fbfb Mon Sep 17 00:00:00 2001
From: Weize Chen <32613237+chenweize1998@users.noreply.github.com>
Date: Mon, 9 Oct 2023 22:36:51 +0800
Subject: [PATCH 012/101] ci skip. Update README.md
---
README.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/README.md b/README.md
index ed3a7526b..45f91e957 100644
--- a/README.md
+++ b/README.md
@@ -20,6 +20,7 @@
+
@@ -68,6 +69,8 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
Also, if you're passionate about advancing the frontiers of multi-agent environments and are eager to dive deeper into research, we invite you to join our team at THUNLP. To explore this exciting opportunity and embark on a collaborative journey with us, please reach out to [chenweize1998@gmail.com](mailto:chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](mailto:yushengsu.thu@gmail.com) and express your interest. We're keen to welcome motivated individuals like you to our lab!
+👉 Also, join our Discord: https://discord.gg/cnutfCtC.
+
## 🗓 Coming Soon
- [x] Code release of our [paper](https://arxiv.org/abs/2308.10848)
- [ ] Add documentation
From 0360286d5125c01c551e9eb9b2076500d150aab0 Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Tue, 10 Oct 2023 09:39:36 +0800
Subject: [PATCH 013/101] fix: fix the incompatible simulation configs. ci skip
---
.../simulation_agent/prisoner_dilemma.py | 4 ++--
.../tasks/simulation/alice_home/config.yaml | 4 ++++
.../tasks/simulation/db_diag/config.yaml | 6 ++++++
.../math_problem_2players_tools/config.yaml | 4 ++++
.../nlp_classroom_3players/config.yaml | 7 +++++++
.../config.yaml | 6 ++++++
.../nlp_classroom_9players/config.yaml | 18 ++++++++++++++++++
.../nlp_classroom_9players_group/config.yaml | 18 ++++++++++++++++++
.../tasks/simulation/pokemon/config.yaml | 12 ++++++++++++
.../simulation/prisoner_dilemma/config.yaml | 6 ++++++
.../sde_team_2players/partial_config.yaml | 8 +++++++-
.../sde_team/sde_team_3players/config.yaml | 5 ++++-
12 files changed, 94 insertions(+), 4 deletions(-)
diff --git a/agentverse/agents/simulation_agent/prisoner_dilemma.py b/agentverse/agents/simulation_agent/prisoner_dilemma.py
index eb4390ecb..bf257c168 100644
--- a/agentverse/agents/simulation_agent/prisoner_dilemma.py
+++ b/agentverse/agents/simulation_agent/prisoner_dilemma.py
@@ -6,8 +6,8 @@
from agentverse.message import Message
-#from . import agent_registry
-#from .base import BaseAgent
+# from . import agent_registry
+# from .base import BaseAgent
from agentverse.agents import agent_registry
from agentverse.agents.base import BaseAgent
diff --git a/agentverse/tasks/simulation/alice_home/config.yaml b/agentverse/tasks/simulation/alice_home/config.yaml
index afec6e036..1c79377da 100644
--- a/agentverse/tasks/simulation/alice_home/config.yaml
+++ b/agentverse/tasks/simulation/alice_home/config.yaml
@@ -84,6 +84,8 @@ agents:
llm_type: gpt-4
temperature: 0.3
max_tokens: 128
+ output_parser:
+ type: alice_home
current_time: "2023-04-01 07:00:00"
-
agent_type: reflection
@@ -120,6 +122,8 @@ agents:
llm_type: gpt-4
temperature: 0.3
max_tokens: 128
+ output_parser:
+ type: alice_home
current_time: "2023-04-01 07:00:00"
tools: ~
\ No newline at end of file
diff --git a/agentverse/tasks/simulation/db_diag/config.yaml b/agentverse/tasks/simulation/db_diag/config.yaml
index 1e4e055f7..f89d8cd19 100644
--- a/agentverse/tasks/simulation/db_diag/config.yaml
+++ b/agentverse/tasks/simulation/db_diag/config.yaml
@@ -211,6 +211,8 @@ agents:
model: gpt-4
temperature: 0.7
max_tokens: 1024
+ output_parser:
+ type: db_diag
tools: *tools
verbose: true
-
@@ -233,6 +235,8 @@ agents:
model: gpt-4
temperature: 0.7
max_tokens: 512
+ output_parser:
+ type: db_diag
tools: *tools
verbose: true
-
@@ -255,5 +259,7 @@ agents:
model: gpt-4
temperature: 0.7
max_tokens: 512
+ output_parser:
+ type: db_diag
tools: *tools
verbose: true
\ No newline at end of file
diff --git a/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml b/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml
index 2a345b54b..1599d8356 100644
--- a/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml
+++ b/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml
@@ -98,6 +98,8 @@ agents:
model: text-davinci-003
temperature: 0.7
max_tokens: 250
+ output_parser:
+ type: math_problem_2players_tools
tools: *tools
- agent_type: tool
@@ -112,4 +114,6 @@ agents:
model: text-davinci-003
temperature: 0.7
max_tokens: 250
+ output_parser:
+ type: math_problem_2players_tools
tools: *tools
diff --git a/agentverse/tasks/simulation/nlp_classroom_3players/config.yaml b/agentverse/tasks/simulation/nlp_classroom_3players/config.yaml
index 41fa1cb8f..234891698 100644
--- a/agentverse/tasks/simulation/nlp_classroom_3players/config.yaml
+++ b/agentverse/tasks/simulation/nlp_classroom_3players/config.yaml
@@ -40,6 +40,8 @@ agents:
model: 'gpt-4'
temperature: 0.7
max_tokens: 250
+ output_parser:
+ type: nlp_classroom_3players
- agent_type: conversation
name: Student Beta
role_description: You are Beta, a student curious about Natural Language Processing and you want to learn some basic concepts of NLP. You know nothing about the area so you will ask lots of questions.
@@ -51,6 +53,8 @@ agents:
model: 'gpt-4'
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_3players
- agent_type: conversation
name: Teaching Assistant Gamma
role_description: You are Gamma, a teaching assistant of the Natural Language Processing module. You mostly help with logistics and marking, but occasionally handles questions. Your answer should be less than 100 words.
@@ -62,5 +66,8 @@ agents:
model: gpt-4
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_3players
+
tools:
diff --git a/agentverse/tasks/simulation/nlp_classroom_3players_withtool/config.yaml b/agentverse/tasks/simulation/nlp_classroom_3players_withtool/config.yaml
index be2e07084..336c7e25e 100644
--- a/agentverse/tasks/simulation/nlp_classroom_3players_withtool/config.yaml
+++ b/agentverse/tasks/simulation/nlp_classroom_3players_withtool/config.yaml
@@ -160,6 +160,8 @@ agents:
model: text-davinci-003
temperature: 0.7
max_tokens: 250
+ output_parser:
+ type: nlp_classroom_3players_withtool
memory:
memory_type: chat_history
verbose: true
@@ -182,6 +184,8 @@ agents:
model: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_3players_withtool
tools: *tools
verbose: true
- agent_type: tool
@@ -203,5 +207,7 @@ agents:
model: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_3players_withtool
tools: *tools
verbose: true
diff --git a/agentverse/tasks/simulation/nlp_classroom_9players/config.yaml b/agentverse/tasks/simulation/nlp_classroom_9players/config.yaml
index 6ec120a1d..2217a6223 100644
--- a/agentverse/tasks/simulation/nlp_classroom_9players/config.yaml
+++ b/agentverse/tasks/simulation/nlp_classroom_9players/config.yaml
@@ -99,6 +99,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 250
+ output_parser:
+ type: nlp_classroom_9players
memory:
memory_type: chat_history
-
@@ -113,6 +115,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
-
agent_type: conversation
name: Student Amelia
@@ -125,6 +129,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
-
agent_type: conversation
name: Student Ethan
@@ -137,6 +143,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
-
agent_type: conversation
name: Student Charlotte
@@ -149,6 +157,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
-
agent_type: conversation
name: Student Mason
@@ -161,6 +171,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
-
agent_type: conversation
name: Student Ava
@@ -173,6 +185,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
-
agent_type: conversation
name: Student Noah
@@ -185,6 +199,8 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
-
agent_type: conversation
name: Student Emma
@@ -197,5 +213,7 @@ agents:
model: "gpt-4"
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players
tools:
diff --git a/agentverse/tasks/simulation/nlp_classroom_9players_group/config.yaml b/agentverse/tasks/simulation/nlp_classroom_9players_group/config.yaml
index 42221521e..b8d9d0319 100644
--- a/agentverse/tasks/simulation/nlp_classroom_9players_group/config.yaml
+++ b/agentverse/tasks/simulation/nlp_classroom_9players_group/config.yaml
@@ -118,6 +118,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 250
+ output_parser:
+ type: nlp_classroom_9players_group
memory:
memory_type: chat_history
-
@@ -131,6 +133,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
-
agent_type: conversation
name: Student Amelia
@@ -142,6 +146,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
-
agent_type: conversation
name: Student Ethan
@@ -153,6 +159,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
-
agent_type: conversation
name: Student Charlotte
@@ -164,6 +172,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
-
agent_type: conversation
name: Student Mason
@@ -175,6 +185,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
-
agent_type: conversation
name: Student Ava
@@ -186,6 +198,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
-
agent_type: conversation
name: Student Noah
@@ -197,6 +211,8 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
-
agent_type: conversation
name: Student Emma
@@ -208,5 +224,7 @@ agents:
llm_type: text-davinci-003
temperature: 0.7
max_tokens: 100
+ output_parser:
+ type: nlp_classroom_9players_group
tools:
\ No newline at end of file
diff --git a/agentverse/tasks/simulation/pokemon/config.yaml b/agentverse/tasks/simulation/pokemon/config.yaml
index b032a501e..0a152de8d 100644
--- a/agentverse/tasks/simulation/pokemon/config.yaml
+++ b/agentverse/tasks/simulation/pokemon/config.yaml
@@ -84,6 +84,8 @@ agents:
memory:
memory_type: chat_history
prompt_template: *prompt
+ output_parser:
+ type: pokemon
llm:
llm_type: gpt-3.5-turbo
model: 'gpt-3.5-turbo'
@@ -102,6 +104,8 @@ agents:
memory:
memory_type: chat_history
prompt_template: *prompt
+ output_parser:
+ type: pokemon
llm:
llm_type: gpt-3.5-turbo
model: gpt-3.5-turbo
@@ -120,6 +124,8 @@ agents:
memory:
memory_type: chat_history
prompt_template: *prompt
+ output_parser:
+ type: pokemon
llm:
llm_type: gpt-3.5-turbo
model: gpt-3.5-turbo
@@ -138,6 +144,8 @@ agents:
memory:
memory_type: chat_history
prompt_template: *prompt
+ output_parser:
+ type: pokemon
llm:
llm_type: gpt-3.5-turbo
model: gpt-3.5-turbo
@@ -156,6 +164,8 @@ agents:
memory:
memory_type: chat_history
prompt_template: *prompt
+ output_parser:
+ type: pokemon
llm:
llm_type: gpt-3.5-turbo
model: gpt-3.5-turbo
@@ -174,6 +184,8 @@ agents:
memory:
memory_type: chat_history
prompt_template: *prompt
+ output_parser:
+ type: pokemon
llm:
llm_type: gpt-3.5-turbo
model: gpt-3.5-turbo
diff --git a/agentverse/tasks/simulation/prisoner_dilemma/config.yaml b/agentverse/tasks/simulation/prisoner_dilemma/config.yaml
index 6b2b79005..dc03e4120 100644
--- a/agentverse/tasks/simulation/prisoner_dilemma/config.yaml
+++ b/agentverse/tasks/simulation/prisoner_dilemma/config.yaml
@@ -56,6 +56,8 @@ agents:
llm_type: gpt-4
temperature: 1.2
max_tokens: 200
+ output_parser:
+ type: prisoner_dilemma
- agent_type: prisoner
name: Suspect1
personality: "You are a Sophisticated Egoist, you always seek for your personal interests best"
@@ -76,6 +78,8 @@ agents:
llm_type: gpt-4
temperature: 1.2
max_tokens: 100
+ output_parser:
+ type: prisoner_dilemma
- agent_type: prisoner
name: Suspect2
personality: ""
@@ -96,5 +100,7 @@ agents:
llm_type: gpt-4
temperature: 1.2
max_tokens: 100
+ output_parser:
+ type: prisoner_dilemma
tools:
diff --git a/agentverse/tasks/simulation/sde_team/sde_team_2players/partial_config.yaml b/agentverse/tasks/simulation/sde_team/sde_team_2players/partial_config.yaml
index e01f19312..881b0ef31 100644
--- a/agentverse/tasks/simulation/sde_team/sde_team_2players/partial_config.yaml
+++ b/agentverse/tasks/simulation/sde_team/sde_team_2players/partial_config.yaml
@@ -159,6 +159,8 @@ agents:
llm_type: gpt-3.5-turbo
temperature: 0.
max_tokens: 1024
+ output_parser:
+ type: sde_team/sde_team_2players
- agent_type: conversation
name: code_tester
@@ -172,6 +174,8 @@ agents:
llm_type: gpt-3.5-turbo
temperature: 0.
max_tokens: 256
+ output_parser:
+ type: sde_team/sde_team_2players
- agent_type: conversation
name: code_reviewer
@@ -184,4 +188,6 @@ agents:
llm:
llm_type: gpt-3.5-turbo
temperature: 0.
- max_tokens: 1024
\ No newline at end of file
+ max_tokens: 1024
+ output_parser:
+ type: sde_team/sde_team_2players
\ No newline at end of file
diff --git a/agentverse/tasks/simulation/sde_team/sde_team_3players/config.yaml b/agentverse/tasks/simulation/sde_team/sde_team_3players/config.yaml
index fe4ff4369..a4b0ef661 100644
--- a/agentverse/tasks/simulation/sde_team/sde_team_3players/config.yaml
+++ b/agentverse/tasks/simulation/sde_team/sde_team_3players/config.yaml
@@ -227,6 +227,7 @@ agents:
llm_type: gpt-3.5-turbo
temperature: 0.3
max_tokens: 1024
+ output_parser: sde_team/sde_team_3players
- agent_type: conversation
name: code_reviewer
@@ -240,6 +241,7 @@ agents:
llm_type: gpt-3.5-turbo
temperature: 0.3
max_tokens: 1024
+ output_parser: sde_team/sde_team_3players
- agent_type: conversation
name: unit_test_generator
@@ -249,4 +251,5 @@ agents:
llm:
llm_type: gpt-3.5-turbo
temperature: 1.0
- max_tokens: 1024
\ No newline at end of file
+ max_tokens: 1024
+ output_parser: sde_team/sde_team_3players
\ No newline at end of file
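The patches above attach an `output_parser` type string to each agent config. A minimal sketch of how such a string-keyed registry is commonly wired — hypothetical names, not necessarily AgentVerse's actual API:

```python
class Registry:
    """Maps a type string from a YAML config to a registered class."""

    def __init__(self):
        self._entries = {}

    def register(self, name):
        # Decorator that records a class under its config type string,
        # e.g. "prisoner_dilemma" or "sde_team/sde_team_2players".
        def wrap(cls):
            self._entries[name] = cls
            return cls
        return wrap

    def build(self, name, **kwargs):
        # Instantiate the class registered under `name`.
        return self._entries[name](**kwargs)


output_parser_registry = Registry()


@output_parser_registry.register("prisoner_dilemma")
class PrisonerDilemmaParser:
    def parse(self, text):
        return text.strip()
```

With this pattern, `output_parser: {type: prisoner_dilemma}` in the YAML resolves to `output_parser_registry.build("prisoner_dilemma")` at load time.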
From ccf4319620ce9d2be01e3bb9b6f69f73b069e8a7 Mon Sep 17 00:00:00 2001
From: mlmz <54172054+minleminzui@users.noreply.github.com>
Date: Tue, 10 Oct 2023 16:09:17 +0800
Subject: [PATCH 014/101] fix: complete the command line (#55)
* fix: complete the command line for agentverse's demos to make it more convenient for users to use
* fix: better cli command name
* fix: change default tasks_dir
---------
Co-authored-by: Weize Chen <32613237+chenweize1998@users.noreply.github.com>
Co-authored-by: chenweize1998
---
.github/workflows/test.yml | 3 +-
agentverse_command/__init__.py | 0
.../benchmark.py | 13 +++++--
.../main_simulation_cli.py | 17 ++++++--
agentverse_command/main_simulation_gui.py | 21 ++++++++++
.../main_tasksolving_cli.py | 18 +++++++--
main_simulation_gui.py | 12 ------
setup.py | 39 ++++++++++++-------
8 files changed, 85 insertions(+), 38 deletions(-)
create mode 100644 agentverse_command/__init__.py
rename benchmark.py => agentverse_command/benchmark.py (94%)
rename main_simulation_cli.py => agentverse_command/main_simulation_cli.py (58%)
create mode 100644 agentverse_command/main_simulation_gui.py
rename main_tasksolving_cli.py => agentverse_command/main_tasksolving_cli.py (53%)
delete mode 100644 main_simulation_gui.py
diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index fa9454dfb..bf14b09df 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -36,5 +36,6 @@ jobs:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_ORGANIZATION: ${{ secrets.OPENAI_ORGANIZATION }}
run: |
- python benchmark.py --task tasksolving/mgsm/gpt-3.5 --dataset_path data/mgsm/test_sample.jsonl --overwrite --output_path ci_smoke_test_output
+ python setup.py develop
+ python agentverse_command/benchmark.py --task tasksolving/mgsm/gpt-3.5 --dataset_path data/mgsm/test_sample.jsonl --overwrite --output_path ci_smoke_test_output --tasks_dir ./agentverse/tasks
python evaluate_math.py --path ci_smoke_test_output/results.jsonl --ci_smoke_test
\ No newline at end of file
diff --git a/agentverse_command/__init__.py b/agentverse_command/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/benchmark.py b/agentverse_command/benchmark.py
similarity index 94%
rename from benchmark.py
rename to agentverse_command/benchmark.py
index 01d2cb232..e9da7cf0a 100644
--- a/benchmark.py
+++ b/agentverse_command/benchmark.py
@@ -13,8 +13,11 @@
parser = ArgumentParser()
parser.add_argument("--task", type=str, default="tasksolving/responsegen")
-parser.add_argument("--tasks_dir", type=str, default=os.path.join(
- os.path.dirname(__file__), "agentverse", "tasks"))
+parser.add_argument(
+ "--tasks_dir",
+ type=str,
+ default=os.path.join(os.path.dirname(__file__), "..", "agentverse", "tasks"),
+)
parser.add_argument("--dataset_path", type=str, required=True)
parser.add_argument("--output_path", type=str, default=None)
parser.add_argument("--has_tools", action="store_true")
@@ -32,7 +35,7 @@ def get_dataloader(task, dataset_path):
return dataloader_registry.build(task, path=dataset_path)
-if __name__ == "__main__":
+def cli_main():
dataloader = get_dataloader(args.task, args.dataset_path)
if args.output_path is None:
os.makedirs(f"./results/{args.task}", exist_ok=True)
@@ -78,3 +81,7 @@ def get_dataloader(task, dataset_path):
)
f.flush()
f.close()
+
+
+if __name__ == "__main__":
+ cli_main()
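The hunk above moves the former `if __name__ == "__main__":` body into a `cli_main()` function so the script can double as a console-script entry point. The general shape, sketched with hypothetical names (here `parse_args([])` is used so the sketch is import-safe; the real scripts parse `sys.argv`):

```python
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument("--task", type=str, default="tasksolving/responsegen")
# As in the patch, arguments are still parsed at module level.
args = parser.parse_args([])


def cli_main():
    # Former script body goes here, so a setuptools console_scripts
    # entry can call it directly without executing the file as __main__.
    return args.task


if __name__ == "__main__":
    cli_main()
```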
diff --git a/main_simulation_cli.py b/agentverse_command/main_simulation_cli.py
similarity index 58%
rename from main_simulation_cli.py
rename to agentverse_command/main_simulation_cli.py
index 48f56e567..af938e112 100644
--- a/main_simulation_cli.py
+++ b/agentverse_command/main_simulation_cli.py
@@ -7,12 +7,21 @@
parser = ArgumentParser()
parser.add_argument("--task", type=str, default="simulation/prisoner_dilemma")
-parser.add_argument("--tasks_dir", type=str, default=os.path.join(
- os.path.dirname(__file__), "agentverse", "tasks"))
+parser.add_argument(
+ "--tasks_dir",
+ type=str,
+ default=os.path.join(os.path.dirname(__file__), "..", "agentverse", "tasks"),
+)
parser.add_argument("--debug", action="store_true")
args = parser.parse_args()
logger.set_level(logging.DEBUG if args.debug else logging.INFO)
-agentverse = Simulation.from_task(args.task, args.tasks_dir)
-agentverse.run()
+
+def cli_main():
+ agentverse = Simulation.from_task(args.task, args.tasks_dir)
+ agentverse.run()
+
+
+if __name__ == "__main__":
+ cli_main()
diff --git a/agentverse_command/main_simulation_gui.py b/agentverse_command/main_simulation_gui.py
new file mode 100644
index 000000000..b91df38c0
--- /dev/null
+++ b/agentverse_command/main_simulation_gui.py
@@ -0,0 +1,21 @@
+import os
+from agentverse.gui import GUI
+from argparse import ArgumentParser
+
+parser = ArgumentParser()
+parser.add_argument("--task", type=str, default="simulation/nlp_classroom_9players")
+parser.add_argument(
+ "--tasks_dir",
+ type=str,
+ default=os.path.join(os.path.dirname(__file__), "..", "agentverse", "tasks"),
+)
+args = parser.parse_args()
+
+
+def cli_main():
+ ui = GUI(args.task, args.tasks_dir)
+ ui.launch()
+
+
+if __name__ == "__main__":
+ cli_main()
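Because the commands now live one directory below the repository root, each default `tasks_dir` is built with a `".."` hop relative to the module file. A small sketch of that path resolution (hypothetical helper name):

```python
import os


def default_tasks_dir(command_file):
    # Resolve the bundled tasks directory relative to a command module,
    # e.g. <repo>/agentverse_command/benchmark.py -> <repo>/agentverse/tasks
    return os.path.normpath(
        os.path.join(os.path.dirname(command_file), "..", "agentverse", "tasks")
    )
```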
diff --git a/main_tasksolving_cli.py b/agentverse_command/main_tasksolving_cli.py
similarity index 53%
rename from main_tasksolving_cli.py
rename to agentverse_command/main_tasksolving_cli.py
index aea6d1433..2382adf90 100644
--- a/main_tasksolving_cli.py
+++ b/agentverse_command/main_tasksolving_cli.py
@@ -1,3 +1,4 @@
+import os
import logging
# from agentverse.agentverse import AgentVerse
@@ -11,12 +12,23 @@
parser.add_argument(
"--task",
type=str,
- default="tasksolving/pipeline_brainstorming",
+ default="tasksolving/brainstorming",
)
parser.add_argument("--debug", action="store_true")
+parser.add_argument(
+ "--tasks_dir",
+ type=str,
+ default=os.path.join(os.path.dirname(__file__), "..", "agentverse", "tasks"),
+)
args = parser.parse_args()
logger.set_level(logging.DEBUG if args.debug else logging.INFO)
-agentversepipeline = TaskSolving.from_task(args.task)
-agentversepipeline.run()
+
+def cli_main():
+ agentversepipeline = TaskSolving.from_task(args.task, args.tasks_dir)
+ agentversepipeline.run()
+
+
+if __name__ == "__main__":
+ cli_main()
diff --git a/main_simulation_gui.py b/main_simulation_gui.py
deleted file mode 100644
index 576b980c5..000000000
--- a/main_simulation_gui.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import os
-from agentverse.gui import GUI
-from argparse import ArgumentParser
-
-parser = ArgumentParser()
-parser.add_argument("--task", type=str, default="simulation/nlp_classroom_9players")
-parser.add_argument("--tasks_dir", type=str, default=os.path.join(
- os.path.dirname(__file__), "agentverse", "tasks"))
-args = parser.parse_args()
-
-ui = GUI(args.task, args.tasks_dir)
-ui.launch()
diff --git a/setup.py b/setup.py
index f4e8f2128..4d6f5f474 100644
--- a/setup.py
+++ b/setup.py
@@ -2,8 +2,8 @@
from setuptools.command.develop import develop
import subprocess
-# with open("requirements.txt", "r") as f:
-# requirements = f.read().splitlines()
+with open("requirements.txt", "r") as f:
+ requirements = f.read().splitlines()
with open("README.md", "r", encoding='utf8') as fh:
long_description = fh.read()
@@ -24,18 +24,27 @@
"Operating System :: OS Independent",
],
python_requires=">=3.9",
- install_requires=[
- "PyYAML",
- "fastapi",
- "uvicorn",
- "py3langid",
- "iso-639",
- "openai",
- "opencv-python",
- "gradio",
- "httpx[socks]",
- "astunparse",
- "langchain",
- ],
+ # install_requires=[
+ # "PyYAML",
+ # "fastapi",
+ # "uvicorn",
+ # "py3langid",
+ # "iso-639",
+ # "openai",
+ # "opencv-python",
+ # "gradio",
+ # "httpx[socks]",
+ # "astunparse",
+ # "langchain",
+ # ],
+ install_requires=requirements,
include_package_data = True,
+ entry_points={
+ "console_scripts": [
+ "agentverse-benchmark = agentverse_command.benchmark:cli_main",
+ "agentverse-simulation = agentverse_command.main_simulation_cli:cli_main",
+ "agentverse-simulation-gui = agentverse_command.main_simulation_gui:cli_main",
+ "agentverse-tasksolving = agentverse_command.main_tasksolving_cli:cli_main",
+ ],
+ },
)
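The new `console_scripts` entries map command names to `module:function` specs. A minimal sketch of how such a spec is resolved, which is essentially what a setuptools-generated wrapper script does:

```python
from importlib import import_module


def load_entry_point(spec):
    # spec looks like "agentverse_command.benchmark:cli_main"
    module_name, func_name = spec.split(":")
    return getattr(import_module(module_name), func_name)
```

After `pip install`, running `agentverse-simulation --task ...` is therefore equivalent to resolving `agentverse_command.main_simulation_cli:cli_main` and calling it.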
From 6bdce1d5d9a10a674a48f1d06910837afc219df2 Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Tue, 10 Oct 2023 16:26:26 +0800
Subject: [PATCH 015/101] refactor: reorganize script location
---
.github/workflows/test.yml | 2 +-
.gitignore | 3 ++-
.../rules/executor/coverage_test.py | 14 +++++++++-----
.../tasksolving/tool_using/24point/config.yaml | 2 +-
.../tasks/tasksolving/tool_using/bmi/config.yaml | 2 +-
.../tasksolving/tool_using/bookclub/config.yaml | 2 +-
.../tasks/tasksolving/tool_using/car/config.yaml | 2 +-
.../tasks/tasksolving/tool_using/date/config.yaml | 2 +-
.../tasks/tasksolving/tool_using/diy/config.yaml | 2 +-
.../tasks/tasksolving/tool_using/party/config.yaml | 2 +-
.../tasksolving/tool_using/sudoku/config.yaml | 2 +-
.../tasksolving/tool_using/tools_simplified.json | 0
.../tasksolving/tool_using/trending/config.yaml | 2 +-
.../tasksolving/tool_using/vacation/config.yaml | 2 +-
scripts/__init__.py | 0
.../evaluate_commongen.py | 0
evaluate_logic.py => scripts/evaluate_logic.py | 0
evaluate_math.py => scripts/evaluate_math.py | 0
.../evaluate_responsegen.py | 0
test_pokemon_env.py | 4 ----
20 files changed, 22 insertions(+), 21 deletions(-)
rename tools_simplified.json => agentverse/tasks/tasksolving/tool_using/tools_simplified.json (100%)
create mode 100644 scripts/__init__.py
rename evaluate_commongen.py => scripts/evaluate_commongen.py (100%)
rename evaluate_logic.py => scripts/evaluate_logic.py (100%)
rename evaluate_math.py => scripts/evaluate_math.py (100%)
rename evaluate_responsegen.py => scripts/evaluate_responsegen.py (100%)
delete mode 100644 test_pokemon_env.py
diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index bf14b09df..e593f0609 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -38,4 +38,4 @@ jobs:
run: |
python setup.py develop
python agentverse_command/benchmark.py --task tasksolving/mgsm/gpt-3.5 --dataset_path data/mgsm/test_sample.jsonl --overwrite --output_path ci_smoke_test_output --tasks_dir ./agentverse/tasks
- python evaluate_math.py --path ci_smoke_test_output/results.jsonl --ci_smoke_test
\ No newline at end of file
+ python scripts/evaluate_math.py --path ci_smoke_test_output/results.jsonl --ci_smoke_test
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index 66b07b913..bbfb5aec0 100644
--- a/.gitignore
+++ b/.gitignore
@@ -172,4 +172,5 @@ raw/
results
tmp/
data/toolbench
-logs/
\ No newline at end of file
+logs/
+ci_smoke_test_output/
\ No newline at end of file
diff --git a/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py b/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py
index ce3938707..37c3073ba 100644
--- a/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py
+++ b/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py
@@ -29,16 +29,18 @@ def step(
*args,
**kwargs,
) -> Any:
- from evaluate_commongen import scoring
+ from scripts.evaluate_commongen import scoring
- coverage, missing_tokens = scoring([s.content for s in solution], [task_description])
+ coverage, missing_tokens = scoring(
+ [s.content for s in solution], [task_description]
+ )
if len(missing_tokens[0]) == 0:
missing_tokens = "No missing tokens."
else:
missing_tokens = ", ".join(missing_tokens[0])
result = f"Coverage: {coverage*100:.2f}%\nMissing Tokens: {missing_tokens}"
return [ExecutorMessage(content=result)]
-
+
async def astep(
self,
agent: ExecutorAgent,
@@ -47,9 +49,11 @@ async def astep(
*args,
**kwargs,
) -> Any:
- from evaluate_commongen import scoring
+ from scripts.evaluate_commongen import scoring
- coverage, missing_tokens = scoring([s.content for s in solution], [task_description])
+ coverage, missing_tokens = scoring(
+ [s.content for s in solution], [task_description]
+ )
if len(missing_tokens[0]) == 0:
missing_tokens = "No missing tokens."
else:
diff --git a/agentverse/tasks/tasksolving/tool_using/24point/config.yaml b/agentverse/tasks/tasksolving/tool_using/24point/config.yaml
index 11a185f3b..4e65b3f5b 100644
--- a/agentverse/tasks/tasksolving/tool_using/24point/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/24point/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 3
cnt_tool_agents: &cnt_tool_agents 2
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: Recently, it has become popular in the AI field to verify the mathematical reasoning abilities of large language models by observing if they can solve the "24-Point Game." What is this game? Does it have a code-based solution? If it does, provide a Python code along with test cases and test its functionality. What are some other similar games that can be used to test the models' mathematical reasoning abilities?
diff --git a/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml b/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml
index 95ac4cf6a..97411d782 100644
--- a/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 3
cnt_tool_agents: &cnt_tool_agents 2
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I want to lose 5kg in the next 2 months. I weigh 70kg, am 170cm tall, and my age is 25. Calculate my BMI and based on that, suggest a workout routine and daily calorie intake to help me achieve my goal.
diff --git a/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml b/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml
index 98385570b..abc03fe11 100644
--- a/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 3
cnt_tool_agents: &cnt_tool_agents 2
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I want to kick off a book club with my friends. Can you tell me the top 5 bestselling books this month, gather the content summary for each, and find online platforms where we can buy or borrow them?
diff --git a/agentverse/tasks/tasksolving/tool_using/car/config.yaml b/agentverse/tasks/tasksolving/tool_using/car/config.yaml
index 1df8ccd7e..4344c707e 100644
--- a/agentverse/tasks/tasksolving/tool_using/car/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/car/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 4
cnt_tool_agents: &cnt_tool_agents 3
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I am planning to buy a new car. Could you help me compare the features and prices of the latest models of Tesla, Ford, and Toyota? Include details about range, charging time, safety features, and after-sales service. Also, provide a brief analysis of the pros and cons of each car.
diff --git a/agentverse/tasks/tasksolving/tool_using/date/config.yaml b/agentverse/tasks/tasksolving/tool_using/date/config.yaml
index 4dc613e90..6e12f1746 100644
--- a/agentverse/tasks/tasksolving/tool_using/date/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/date/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 4
cnt_tool_agents: &cnt_tool_agents 3
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I am planning a date with my girlfriend this week, please search for a good movie theater and a restaurant near Tsinghua University in Beijing and recommend a good movie to watch. Please search the web.
diff --git a/agentverse/tasks/tasksolving/tool_using/diy/config.yaml b/agentverse/tasks/tasksolving/tool_using/diy/config.yaml
index 8ea9b8ea3..8fa2f173c 100644
--- a/agentverse/tasks/tasksolving/tool_using/diy/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/diy/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 4
cnt_tool_agents: &cnt_tool_agents 3
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I've recently taken an interest in DIY home projects. Search for beginner-friendly DIY projects that can be completed over the weekend. Also, provide a list of materials required and a step-by-step guide for each project.
diff --git a/agentverse/tasks/tasksolving/tool_using/party/config.yaml b/agentverse/tasks/tasksolving/tool_using/party/config.yaml
index 76134a374..df7fad0bb 100644
--- a/agentverse/tasks/tasksolving/tool_using/party/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/party/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 4
cnt_tool_agents: &cnt_tool_agents 3
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I want to hold a party at somewhere around Tsinghua University tomorrow. I need you to look for some best places for holding a party nearby, and tell me whether the weather is good for holding a party tomorrow. Also, I want to know what activities can be considered in my party. Help me search the web.
diff --git a/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml b/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml
index e87547ca5..4d1202028 100644
--- a/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 3
cnt_tool_agents: &cnt_tool_agents 2
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I've just heard an interesting game called 'sudoku'. Can you search for the rules of this game and the solution to this game? Finally, write a python script to automatically solve this game if possible.
diff --git a/tools_simplified.json b/agentverse/tasks/tasksolving/tool_using/tools_simplified.json
similarity index 100%
rename from tools_simplified.json
rename to agentverse/tasks/tasksolving/tool_using/tools_simplified.json
diff --git a/agentverse/tasks/tasksolving/tool_using/trending/config.yaml b/agentverse/tasks/tasksolving/tool_using/trending/config.yaml
index b1685b131..101612774 100644
--- a/agentverse/tasks/tasksolving/tool_using/trending/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/trending/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 4
cnt_tool_agents: &cnt_tool_agents 3
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I'm currently analyzing what is popular on the website. Can you help me find the recent trending stuff. It could be anything, like trending news, products, books, movies, music, etc. Give a summarization for me.
diff --git a/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml b/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml
index 9f0c73c51..c10dd1ed0 100644
--- a/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml
@@ -2,7 +2,7 @@ cnt_agents: &cnt_agents 4
cnt_tool_agents: &cnt_tool_agents 3
max_rounds: &max_rounds 5
max_criticizing_rounds: 3
-tool_config: &tool_config tools_simplified.json
+tool_config: &tool_config agentverse/tasks/tasksolving/tool_using/tools_simplified.json
task_description: I'm planning a two-week vacation to Japan next month. Help me plan my itinerary. I want to visit Tokyo, Kyoto, and Osaka. Look for the top tourist attractions in each city, and also suggest the best mode of travel between these cities. Additionally, find out the weather forecast for the month I'll be visiting.
diff --git a/scripts/__init__.py b/scripts/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/evaluate_commongen.py b/scripts/evaluate_commongen.py
similarity index 100%
rename from evaluate_commongen.py
rename to scripts/evaluate_commongen.py
diff --git a/evaluate_logic.py b/scripts/evaluate_logic.py
similarity index 100%
rename from evaluate_logic.py
rename to scripts/evaluate_logic.py
diff --git a/evaluate_math.py b/scripts/evaluate_math.py
similarity index 100%
rename from evaluate_math.py
rename to scripts/evaluate_math.py
diff --git a/evaluate_responsegen.py b/scripts/evaluate_responsegen.py
similarity index 100%
rename from evaluate_responsegen.py
rename to scripts/evaluate_responsegen.py
diff --git a/test_pokemon_env.py b/test_pokemon_env.py
deleted file mode 100644
index b10e764e7..000000000
--- a/test_pokemon_env.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import requests
-
-requests.post('http://127.0.0.1:10002/make_decision', headers={'Content-Type': 'application/json'}, json={'agent_ids': [0, 1, 2, 3, 4, 5]})
-# requests.post('http://127.0.0.1:10002/chat', headers={'Content-Type': 'application/json'}, json={'content': 'Hi!', 'receiver': 'May', 'receiver_id': 0, 'sender': 'Brendan'})
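The `coverage_test` executor in this patch imports `scoring` from the relocated `scripts` package (hence the new `scripts/__init__.py`). A minimal sketch of such a token-coverage metric, with hypothetical internals that only mirror the call site's `(coverage, missing_tokens)` contract:

```python
def scoring(solutions, task_descriptions):
    """Return mean token coverage and per-task missing tokens (a sketch)."""
    coverages, missing = [], []
    joined = " ".join(solutions).lower()
    for task in task_descriptions:
        required = task.lower().split()
        # Tokens from the task description absent from the solutions.
        absent = [tok for tok in required if tok not in joined]
        coverages.append(1 - len(absent) / len(required))
        missing.append(absent)
    return sum(coverages) / len(coverages), missing
```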
From 6d42bb6e2b58dd93e1f07fd4d5f4065c2b8cb933 Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Tue, 10 Oct 2023 16:42:30 +0800
Subject: [PATCH 016/101] bump version to 0.1.5
---
setup.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/setup.py b/setup.py
index 4d6f5f474..bef006aea 100644
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
setuptools.setup(
name="agentverse",
- version="0.1.3",
+ version="0.1.5",
author="OpenBMB",
author_email="chenweize1998@gmail.com",
description="A versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs).",
From 7954f2161435d2884908dc07d287c08edd4d116a Mon Sep 17 00:00:00 2001
From: Weize Chen <32613237+chenweize1998@users.noreply.github.com>
Date: Tue, 10 Oct 2023 16:49:23 +0800
Subject: [PATCH 017/101] ci skip. Update README.md
---
README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 45f91e957..50b6536b4 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,7 @@
**AgentVerse** offers a versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our framework empowers researchers to concentrate on their research, rather than being bogged down by implementation details.
-⚠️⚠️⚠️ We're refactoring the code, and the goal is to provide a flexibility to construct simulation(without a predefined goal) and task-solving(with a specific goal) environments. Please note that this README is slightly outdated, we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use [`release-1.0`](https://github.com/OpenBMB/AgentVerse/tree/release-1.0) branch.
+⚠️⚠️⚠️ We're refactoring the code to provide the flexibility to construct both simulation (without a predefined goal) and task-solving (with a specific goal) environments. Please note that this README is slightly outdated; we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use the [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
---
@@ -138,7 +138,7 @@ In the context of the text evaluation scenario, we recommend users explore the [
https://github.com/OpenBMB/AgentVerse/assets/75533759/58f33468-f15b-4bac-ae01-8d0780019f85
#### Pokemon
-**Currently available only in [`release-1.0`](https://github.com/OpenBMB/AgentVerse/tree/release-1.0)**. In the game, agents can walk around the game world, and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. There are 6 characters in the Pokémon environment who appeared in Pokemon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie) and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).
+**Currently available only in [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1)**. In the game, agents can walk around the game world, and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. There are 6 characters in the Pokémon environment who appeared in Pokemon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie) and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).
To launch the Pokemon game, first launch a local server with the following command:
```bash
From 7c8dbca6a05653b65fcd3f683862f1783e98c7a7 Mon Sep 17 00:00:00 2001
From: Weize Chen <32613237+chenweize1998@users.noreply.github.com>
Date: Tue, 10 Oct 2023 16:54:15 +0800
Subject: [PATCH 018/101] ci skip. Update README.md
---
README.md | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/README.md b/README.md
index 50b6536b4..de16a5b4f 100644
--- a/README.md
+++ b/README.md
@@ -93,7 +93,7 @@ In the NLP class, the professor and students engage in interactive communication
Use the following command to launch the NLP Classroom example:
```bash
-python main_simulation_gui.py --task simulation/nlp_classroom_9players
+python agentverse_command/main_simulation_gui.py --task simulation/nlp_classroom_9players
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/6ea07850-595e-4a28-a82e-f863011353c2
@@ -104,7 +104,7 @@ A prisoner's Dilemma is a thought experiment that challenges two completely rati
Use the following command to launch the Prisoner Dilemma example:
```bash
-python main_simulation_cli.py --task simulation/prisoner_dilemma
+python agentverse_command/main_simulation_gui.py --task simulation/prisoner_dilemma
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/017c46e5-c738-4fca-9352-b008e2d518bd
@@ -115,7 +115,7 @@ In the Software Design example, a code writer, a code tester and a code reviewer
Use the following command to launch the Software Design example:
```bash
-python main_demo.py --task simulation/sde_team/sde_team_2players
+python agentverse_command/main_simulation_gui.py --task simulation/sde_team/sde_team_2players
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4e54661626a
@@ -126,7 +126,7 @@ https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4
In the database diagnosis scenario, the Chief DBA monitors the system anomalies (e.g., slow queries, locks, crash down). If detected, the domain experts are alerted to analyze root causes, share insights, and suggest optimization solutions together. The Chief DBA then provides a summarized report to the user.
```bash
-python main_simulation_gui.py --task simulation/db_diag
+python agentverse_command/main_simulation_gui.py --task simulation/db_diag
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/c633419d-afbb-47d4-bb12-6bb512e7af3a
@@ -247,7 +247,9 @@ python setup.py develop
You can create a multi-agent environments provided by us. Using the classroom scenario as an example. In this scenario, there are nine agents, one playing the role of a professor and the other eight as students.
```shell
-python3 main_simulation_cli.py --task simulation/nlp_classroom_9players
+python3 agentverse_command/main_simulation_cli.py --task simulation/nlp_classroom_9players
+# or if you have installed AgentVerse via pip
+agentverse-simulation --task simulation/nlp_classroom_9players
```
### Simulation Local Website Demo
@@ -255,7 +257,9 @@ python3 main_simulation_cli.py --task simulation/nlp_classroom_9players
We also provide a local website demo for this environment. You can launch it with
```shell
-python3 main_simulation_gui.py --task simulation/nlp_classroom_9players
+python3 agentverse_command/main_simulation_gui.py --task simulation/nlp_classroom_9players
+# or if you have installed AgentVerse via pip
+agentverse-simulation-gui --task simulation/nlp_classroom_9players
```
After successfully launching the local server, you can visit [http://127.0.0.1:7860/](http://127.0.0.1:7860/) to view the classroom environment.
@@ -265,7 +269,9 @@ To run the experiments with the task-solving environment proposed in our [paper]
```shell
# Run the Humaneval benchmark using gpt-3.5-turbo
-python3 main_tasksolving_cli.py --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
+python3 agentverse_command/main_tasksolving_cli.py --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
+# or if you have installed AgentVerse via pip
+agentverse-tasksolving --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
```
You can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
From 63b2d33dbd063910cd94b6b9671fa178ad931154 Mon Sep 17 00:00:00 2001
From: Weize Chen <32613237+chenweize1998@users.noreply.github.com>
Date: Tue, 10 Oct 2023 17:28:19 +0800
Subject: [PATCH 019/101] ci skip. Update README.md
---
README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index de16a5b4f..4e448d6cb 100644
--- a/README.md
+++ b/README.md
@@ -46,7 +46,7 @@
- 🛠 **Tools (Plugins) Utilization**: AgentVerse supports the multi-agent environments with tools. Currently, AgentVerse supports tools provided in [BMTools](https://github.com/OpenBMB/BMTools).
## 📰 What's New
-- [2023/10/5] 💡 We release the code of our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848), and refactor our codebase to enable the creation of both simulation and task-solving environment!
+- [2023/10/5] 💡 We release the code of our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848), and refactor our codebase to enable the creation of both simulation and task-solving environments! We have placed the code for the Minecraft example in the paper at the [`minecraft`](https://github.com/OpenBMB/AgentVerse/tree/minecraft) branch. Our tool-using example will soon be updated to the `main` branch. Stay tuned!
- [2023/8/22] 📝 We're excited to share our work-in-progress paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) related to this repository.
@@ -69,7 +69,7 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
Also, if you're passionate about advancing the frontiers of multi-agent environments and are eager to dive deeper into research, we invite you to join our team at THUNLP. To explore this exciting opportunity and embark on a collaborative journey with us, please reach out to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com) and express your interest. We're keen to welcome motivated individuals like you to our lab!
-👉Also, our Discord: https://discord.gg/cnutfCtC.
+👉Also, check our Discord: https://discord.gg/cnutfCtC.
## 🗓 Coming Soon
- [x] Code release of our [paper](https://arxiv.org/abs/2308.10848)
From c4205bd3b7f2a5eac0d3229512bc509655b32dc9 Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Wed, 11 Oct 2023 00:13:46 +0800
Subject: [PATCH 020/101] manually merge #19.
---
agentverse/memory_manipulator/__init__.py | 1 +
agentverse/memory_manipulator/plan.py | 79 +++++++++++++++++++
.../tasks/simulation/alice_home/config.yaml | 4 +-
3 files changed, 82 insertions(+), 2 deletions(-)
create mode 100644 agentverse/memory_manipulator/plan.py
diff --git a/agentverse/memory_manipulator/__init__.py b/agentverse/memory_manipulator/__init__.py
index 8ea0e7c68..5d28698a8 100644
--- a/agentverse/memory_manipulator/__init__.py
+++ b/agentverse/memory_manipulator/__init__.py
@@ -5,4 +5,5 @@
from .base import BaseMemoryManipulator
from .basic import BasicMemoryManipulator
from .reflection import Reflection
+from .plan import Plan
diff --git a/agentverse/memory_manipulator/plan.py b/agentverse/memory_manipulator/plan.py
new file mode 100644
index 000000000..530190c0f
--- /dev/null
+++ b/agentverse/memory_manipulator/plan.py
@@ -0,0 +1,79 @@
+from __future__ import annotations
+
+from logging import getLogger
+from typing import List, TYPE_CHECKING
+
+from . import memory_manipulator_registry
+from .base import BaseMemoryManipulator
+from ..message import Message
+
+if TYPE_CHECKING:
+ from agentverse.memory import VectorStoreMemory
+ from agentverse.agents.reflection_agent import ReflectionAgent
+
+logger = getLogger(__name__)
+
+PLAN_PROMPT = """Now you are act for as an agent named ${agent_name} in a virtual world.
+You might need to performing reaction to the observation.
+Based on the following information:
+(1) The agent's description: ${role_description}
+(2) Current time is ${current_time}
+(3) Your history memory is ${chat_history}
+
+Now is ${current_time}. If all plans are expired, you have to plan for\
+the next time periods.
+Do you need to generate new plans?
+If yes, tell me the new plan, including the time period.
+If no, just tell me No."""
+
+
+@memory_manipulator_registry.register("plan")
+class Plan(BaseMemoryManipulator):
+ """
+ Memory manipulator for plan.
+ """
+ memory: VectorStoreMemory = None
+ agent: ReflectionAgent = None # specify ReflectionAgent
+ # TODO: consider removing current_time to be more general,
+ # and then change the type to BaseAgent
+ plan: List[str] = []
+
+ def manipulate_memory(self) -> str:
+ """
+ Generate new plans
+ """
+ prompt = self._fill_prompt_template()
+ result = self.agent.llm.generate_response(prompt).content
+ result = result.strip('.')
+ logger.info(f"{self.agent.name}'s new plan: {result}")
+ if result == "No":
+ return ""
+ else:
+ self.plan.append(result)
+ plan_message = Message(
+ content=result,
+ sender=self.agent.name,
+ receiver={self.agent.name})
+ self.agent.memory.add_message([plan_message])
+ return result
+
+
+ def _fill_prompt_template(self) -> str:
+ """Fill the placeholders in the prompt template
+
+ Four placeholders are supported:
+ - ${agent_name}: the name of the agent
+ - ${role_description}: the description of the role of the agent
+ - ${chat_history}: the chat history of the agent
+ - ${current_time}: the current time in the environment
+ """
+ input_arguments = {
+ "agent_name": self.agent.name,
+ "role_description": self.agent.role_description,
+ "chat_history": self.agent.memory.to_string(add_sender_prefix=True),
+ "current_time": self.agent.current_time,
+ }
+ # PLAN_PROMPT uses ${...} placeholders, so substitute with string.Template
+ from string import Template
+
+ return Template(PLAN_PROMPT).safe_substitute(input_arguments)
+
+ def reset(self) -> None:
+ pass
diff --git a/agentverse/tasks/simulation/alice_home/config.yaml b/agentverse/tasks/simulation/alice_home/config.yaml
index 1c79377da..8147ab38e 100644
--- a/agentverse/tasks/simulation/alice_home/config.yaml
+++ b/agentverse/tasks/simulation/alice_home/config.yaml
@@ -77,7 +77,7 @@ agents:
memory:
memory_type: vectorstore
memory_manipulator:
- memory_manipulator_type: reflection
+ memory_manipulator_type: plan
prompt_template: *prompt
llm:
model: "gpt-4"
@@ -126,4 +126,4 @@ agents:
type: alice_home
current_time: "2023-04-01 07:00:00"
-tools: ~
\ No newline at end of file
+tools: ~
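The `Plan` manipulator added above boils down to: fill the prompt from agent state, ask the LLM, and either keep the current plans (answer `No`) or record a new one. A standalone sketch of that loop with a stubbed LLM (the real class calls `self.agent.llm.generate_response` and writes a `Message` into `VectorStoreMemory`):

```python
from string import Template

# ${...} placeholders, filled with Template.safe_substitute as in the repo.
PLAN_PROMPT = Template(
    "Now you are acting as an agent named ${agent_name} in a virtual world.\n"
    "Now is ${current_time}. If all plans have expired, plan for the next "
    "time period.\nIf no new plan is needed, just tell me No."
)

class Plan:
    def __init__(self, name, current_time, llm):
        self.name = name
        self.current_time = current_time
        self.llm = llm          # stub standing in for agent.llm.generate_response
        self.plan = []

    def manipulate_memory(self) -> str:
        prompt = PLAN_PROMPT.safe_substitute(
            agent_name=self.name, current_time=self.current_time
        )
        result = self.llm(prompt).strip(".")
        if result == "No":
            return ""             # keep the existing plans
        self.plan.append(result)  # the real code also adds a Message to memory
        return result

stub_llm = lambda prompt: "07:00-08:00: cook and eat breakfast."
p = Plan("Alice", "2023-04-01 07:00:00", stub_llm)
print(p.manipulate_memory())
```

`safe_substitute` fills the `${...}` placeholders without raising on unknown keys, which matches how the rest of the codebase treats its prompt templates.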
From 9b03c91fd5339ca4e5e5550b54189eb9b157e8c8 Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Wed, 11 Oct 2023 18:46:25 +0800
Subject: [PATCH 021/101] fix: format error in some benchmark configs
---
.../tasksolving/brainstorming/config.yaml | 21 +-
.../tasksolving/humaneval/gpt-3.5/config.yaml | 19 +-
.../tasksolving/humaneval/gpt-4/config.yaml | 19 +-
.../tasksolving/pythoncalculator/config.yaml | 21 +-
.../tasks/tasksolving/responsegen/config.yaml | 192 ------------------
agentverse_command/benchmark.py | 6 +-
6 files changed, 45 insertions(+), 233 deletions(-)
delete mode 100644 agentverse/tasks/tasksolving/responsegen/config.yaml
diff --git a/agentverse/tasks/tasksolving/brainstorming/config.yaml b/agentverse/tasks/tasksolving/brainstorming/config.yaml
index 048c693bc..82be06bcd 100644
--- a/agentverse/tasks/tasksolving/brainstorming/config.yaml
+++ b/agentverse/tasks/tasksolving/brainstorming/config.yaml
@@ -101,16 +101,17 @@ name: pipeline
environment:
env_type: task-basic
max_turn: *max_turn
- role_assigner:
- type: role_description
- cnt_agents: *cnt_agents
- decision_maker:
- type: brainstorming
- max_inner_turns: *max_inner_turns
- executor:
- type: none
- evaluator:
- type: basic
+ rule:
+ role_assigner:
+ type: role_description
+ cnt_agents: *cnt_agents
+ decision_maker:
+ type: brainstorming
+ max_inner_turns: *max_inner_turns
+ executor:
+ type: none
+ evaluator:
+ type: basic
agents:
- #role_assigner_agent:
diff --git a/agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml b/agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml
index 78b91f2c9..8b221f23c 100644
--- a/agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml
+++ b/agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml
@@ -129,15 +129,16 @@ name: pipeline
environment:
env_type: task-basic
max_turn: *max_turn
- role_assigner:
- type: role_description
- cnt_agents: *cnt_agents
- decision_maker:
- type: vertical-solver-first
- executor:
- type: none
- evaluator:
- type: basic
+ rule:
+ role_assigner:
+ type: role_description
+ cnt_agents: *cnt_agents
+ decision_maker:
+ type: vertical-solver-first
+ executor:
+ type: none
+ evaluator:
+ type: basic
agents:
- #role_assigner_agent:
diff --git a/agentverse/tasks/tasksolving/humaneval/gpt-4/config.yaml b/agentverse/tasks/tasksolving/humaneval/gpt-4/config.yaml
index 83d6dd458..2525b4e6b 100644
--- a/agentverse/tasks/tasksolving/humaneval/gpt-4/config.yaml
+++ b/agentverse/tasks/tasksolving/humaneval/gpt-4/config.yaml
@@ -125,15 +125,16 @@ name: pipeline
environment:
env_type: task-basic
max_turn: *max_turn
- role_assigner:
- type: role_description
- cnt_agents: *cnt_agents
- decision_maker:
- type: vertical-solver-first
- executor:
- type: code-test
- evaluator:
- type: basic
+ rule:
+ role_assigner:
+ type: role_description
+ cnt_agents: *cnt_agents
+ decision_maker:
+ type: vertical-solver-first
+ executor:
+ type: code-test
+ evaluator:
+ type: basic
agents:
- #role_assigner_agent:
diff --git a/agentverse/tasks/tasksolving/pythoncalculator/config.yaml b/agentverse/tasks/tasksolving/pythoncalculator/config.yaml
index 231aacf55..8615fd82c 100644
--- a/agentverse/tasks/tasksolving/pythoncalculator/config.yaml
+++ b/agentverse/tasks/tasksolving/pythoncalculator/config.yaml
@@ -99,16 +99,17 @@ name: pipeline
environment:
env_type: task-basic
max_turn: *max_turn
- role_assigner:
- type: role_description
- cnt_agents: *cnt_agents
- decision_maker:
- type: vertical-solver-first
- max_inner_turns: *max_inner_turns
- executor:
- type: none
- evaluator:
- type: basic
+ rule:
+ role_assigner:
+ type: role_description
+ cnt_agents: *cnt_agents
+ decision_maker:
+ type: vertical-solver-first
+ max_inner_turns: *max_inner_turns
+ executor:
+ type: none
+ evaluator:
+ type: basic
agents:
- #role_assigner_agent:
diff --git a/agentverse/tasks/tasksolving/responsegen/config.yaml b/agentverse/tasks/tasksolving/responsegen/config.yaml
deleted file mode 100644
index b762d7c62..000000000
--- a/agentverse/tasks/tasksolving/responsegen/config.yaml
+++ /dev/null
@@ -1,192 +0,0 @@
-cnt_critic_agents: 3
-max_loop_rounds: &max_loop_rounds 5
-max_criticizing_rounds: 3
-human_eval: false
-evaluation_dimensions: |-
-
-prompts:
- role_assigner_prompt: &role_assigner_prompt |-
- # Role Description
- You are the leader of a group of experts, now you need to generate a response based on the text:
- ${task_description}
-
- You can recruit ${cnt_critic_agents} expert in different fields. What experts will you recruit to better generate an accurate solution?
-
- # Response Format Guidance
- You should respond with a list of expert description. For example:
- 1. an electrical engineer specified in the filed of xxx
- 2. an economist who is good at xxx
- 3. a lawyer with a good knowledge of xxx
- ...
-
- You don't have to give the reason.
-
- solver_prompt: &solver_prompt |-
- # Problem
- You need to generate a response based on the text:
- ${task_description}
-
- # Previous Solution
- The solution you gave in the last step is:
- ${former_solution}
-
- # Critics
- Critics in the group gave the following opinions:
- ${critic_opinions}
-
- # Your Task
- Now based upon the former solution and the critics' opinions, please give a new solution.
- Your solution should contain only your response beginning with "System: ".
- Do not give any additional information.
-
- critic_prompt: &critic_prompt |-
- # Role Description and Problem to Solve
- You are ${role_description}. You are in a discussion group, aiming to generate a response based on the text:
- ${task_description}
-
- # Preliminary Solution
- Now the group gives a preliminary solution as follows:
- ${preliminary_solution}
-
- # Advice
- Meanwhile, another expert gave the following advice on the solution:
- ${advice}
-
- # Response Format Guidance
- - If you thinks the preliminary solution is perfect, respond using the following format:
- Action: Agree
- Action Input: Agree.
- (Do not output your reason for agreeing!)
-
- - If you think it is flawed, give your advice use the following output format:
- Action: Disagree
- Action Input: (explain why you disagree)
-
- # Your Task
- Based on your knowledge in your field, do you agree that this solution is the best response based on the text?
-
- evaluator_prompt: &evaluator_prompt |-
- # Role Description
- You are an experienced dialogue teacher. As a good teacher, you carefully check the correctness of the given response based on the text. When the solution is flawed, you should patiently teach the students how to give better response.
-
- # Response Format Guidance
- You must respond in the following format:
- Interesting: (a score between 0 and 9)
- Engaging: (a score between 0 and 9)
- Specific: (a score between 0 and 9)
- Relevant: (a score between 0 and 9)
- Semantically Appropriate: (a score between 0 and 9)
- Understandable: (a score between 0 and 9)
- Fluent: (a score between 0 and 9)
- Overall Impression: (a score between 0 and 9)
- Advice: (your advice on how to correct the solution)
-
- # Problem and Student's Solution
- Problem: ${task_description}
- Student's Solution: ${solution}
-
- # Your Task
- Now carefully check the student's solution, and give your response.
-
-
-name: pipeline
-
-
-environment:
- env_type: task-basic
- max_loop_rounds: *max_loop_rounds
- rule:
- order:
- type: sequential
- visibility:
- type: all
- selector:
- type: basic
- updater:
- type: basic
- describer:
- type: basic
-
-agents:
- - #role_assigner_agent:
- agent_type: role_assigner
- name: role assigner
- prompt_template: *role_assigner_prompt
- memory:
- memory_type: chat_history
- llm:
- llm_type: gpt-3.5-turbo
- model: "gpt-3.5-turbo"
- temperature: 0
- max_tokens: 256
- output_parser:
- type: role_assigner
-
- - #solver_agent:
- agent_type: solver
- name: Planner
- prompt_template: [*solver_prompt, ""]
- memory:
- memory_type: chat_history
- llm:
- llm_type: gpt-3.5-turbo
- model: "gpt-3.5-turbo"
- temperature: 0
- max_tokens: 512
-
- - #critic_agents:
- agent_type: critic
- name: Critic 1
- role_description: |-
- Waiting to be assigned.
- prompt_template: *critic_prompt
- memory:
- memory_type: chat_history
- llm:
- llm_type: gpt-3.5-turbo
- model: "gpt-3.5-turbo"
- temperature: 0
- max_tokens: 256
- output_parser:
- type: responsegen-critic
-
- - #executor_agent:
- agent_type: executor
- name: Executor
- prompt_template: None
- memory:
- memory_type: chat_history
- llm:
- llm_type: gpt-3.5-turbo
- model: "gpt-3.5-turbo"
- temperature: 0
- max_tokens: 512
-
- - #evaluator_agent:
- agent_type: evaluator
- name: Evaluator
- role_description: |-
- Evaluator
- prompt_template: *evaluator_prompt
- memory:
- memory_type: chat_history
- llm:
- llm_type: gpt-3.5-turbo
- model: "gpt-3.5-turbo"
- temperature: 0
- max_tokens: 512
- output_parser:
- type: responsegen-evaluator
- dimensions:
- - Interesting
- - Engaging
- - Specific
- - Relevant
- - Semantically Appropriate
- - Understandable
- - Fluent
- - Overall Impression
-
-
-tools:
-
diff --git a/agentverse_command/benchmark.py b/agentverse_command/benchmark.py
index e9da7cf0a..8ff53333a 100644
--- a/agentverse_command/benchmark.py
+++ b/agentverse_command/benchmark.py
@@ -62,12 +62,12 @@ def cli_main():
assert args.tool_tmp_path is not None
with open(args.tool_tmp_path, "w") as f:
f.write(json.dumps(example["tools"]))
- agentversepipeline = TaskSolving.from_task(args.task, args.tasks_dir)
- agentversepipeline.environment.set_task_description(example["input"])
+ agentverse = TaskSolving.from_task(args.task, args.tasks_dir)
+ agentverse.environment.set_task_description(example["input"])
# print(args.single_agent)
# print(args.discussion_mode)
# exit()
- plan, result, logs = agentversepipeline.run()
+ plan, result, logs = agentverse.run()
f.write(
json.dumps(
{
From 02086999362b100d31aa8c4d57843606f164a50e Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Wed, 11 Oct 2023 18:51:10 +0800
Subject: [PATCH 022/101] fix: update MANIFEST.in [ci skip]
---
MANIFEST.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MANIFEST.in b/MANIFEST.in
index f72f5a0dc..f9aa599fc 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,2 +1,2 @@
-include agentverse/tasks/*/*
include agentverse/tasks/*/*/*
+include agentverse/tasks/*/*/*/*
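In a MANIFEST.in glob, `*` matches within a single path segment, so `agentverse/tasks/*/*/*` covers `tasks/<kind>/<task>/config.yaml` but not the new four-level layouts such as `tasks/tasksolving/humaneval/gpt-4/config.yaml`; hence the extra pattern. `pathlib.PurePosixPath.match` uses the same non-spanning `*` and can illustrate the difference:

```python
from pathlib import PurePosixPath

old_pattern = "agentverse/tasks/*/*/*"
new_pattern = "agentverse/tasks/*/*/*/*"

three_deep = PurePosixPath("agentverse/tasks/tasksolving/brainstorming/config.yaml")
four_deep = PurePosixPath("agentverse/tasks/tasksolving/humaneval/gpt-4/config.yaml")

# Each '*' consumes exactly one path segment, so each depth needs its own pattern.
assert three_deep.match(old_pattern) and not three_deep.match(new_pattern)
assert four_deep.match(new_pattern) and not four_deep.match(old_pattern)
```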
From 88744f1089f689bcebb6c9aaa4c9fcc9f1aee549 Mon Sep 17 00:00:00 2001
From: mlmz <54172054+minleminzui@users.noreply.github.com>
Date: Wed, 11 Oct 2023 19:43:33 +0800
Subject: [PATCH 023/101] fix: arrange output parser classes (#59)
* fix: arrange output parser classes
Put all the output parser classes in one place.
Remove redundant output parser classes.
Reuse output parser classes with the same functionality.
* fix: output parser error
* fix: output parser error
---------
Co-authored-by: chenweize1998
---
agentverse/__init__.py | 5 +-
agentverse/agents/base.py | 2 +-
.../agents/simulation_agent/reflection.py | 2 +-
agentverse/initialization.py | 2 +-
agentverse/output_parser/__init__.py | 5 +
agentverse/output_parser/output_parser.py | 621 ++++++++++++++++++
agentverse/parser.py | 25 -
agentverse/tasks/__init__.py | 28 +-
.../simulation/alice_home/output_parser.py | 29 -
.../tasks/simulation/db_diag/output_parser.py | 40 --
.../math_problem_2players_tools/config.yaml | 10 +-
.../output_parser.py | 32 -
.../nlp_classroom_3players/output_parser.py | 31 -
.../output_parser.py | 39 --
.../nlp_classroom_9players/output_parser.py | 36 -
.../output_parser.py | 34 -
.../tasks/simulation/pokemon/output_parser.py | 36 -
.../prisoner_dilemma/output_parser.py | 74 ---
.../sde_team_2players/output_parser.py | 19 -
.../sde_team_3players/output_parser.py | 19 -
.../brainstorming/output_parser.py | 77 ---
.../tasksolving/commongen/output_parser.py | 93 ---
.../tasksolving/humaneval/output_parser.py | 311 ---------
.../tasksolving/logic_grid/output_parser.py | 13 -
.../tasks/tasksolving/mgsm/output_parser.py | 166 -----
.../pythoncalculator/output_parser.py | 19 -
.../tasksolving/responsegen/output_parser.py | 104 ---
.../tool_using/24point/config.yaml | 2 +-
.../tasksolving/tool_using/bmi/config.yaml | 2 +-
.../tool_using/bookclub/config.yaml | 2 +-
.../tasksolving/tool_using/car/config.yaml | 2 +-
.../tasksolving/tool_using/date/config.yaml | 2 +-
.../tasksolving/tool_using/diy/config.yaml | 2 +-
.../tasksolving/tool_using/output_parser.py | 77 ---
.../tasksolving/tool_using/party/config.yaml | 2 +-
.../tasksolving/tool_using/sudoku/config.yaml | 2 +-
.../tool_using/trending/config.yaml | 2 +-
.../tool_using/vacation/config.yaml | 2 +-
38 files changed, 646 insertions(+), 1323 deletions(-)
create mode 100644 agentverse/output_parser/__init__.py
create mode 100644 agentverse/output_parser/output_parser.py
delete mode 100644 agentverse/parser.py
delete mode 100644 agentverse/tasks/simulation/alice_home/output_parser.py
delete mode 100644 agentverse/tasks/simulation/db_diag/output_parser.py
delete mode 100644 agentverse/tasks/simulation/math_problem_2players_tools/output_parser.py
delete mode 100644 agentverse/tasks/simulation/nlp_classroom_3players/output_parser.py
delete mode 100644 agentverse/tasks/simulation/nlp_classroom_3players_withtool/output_parser.py
delete mode 100644 agentverse/tasks/simulation/nlp_classroom_9players/output_parser.py
delete mode 100644 agentverse/tasks/simulation/nlp_classroom_9players_group/output_parser.py
delete mode 100644 agentverse/tasks/simulation/pokemon/output_parser.py
delete mode 100644 agentverse/tasks/simulation/prisoner_dilemma/output_parser.py
delete mode 100644 agentverse/tasks/simulation/sde_team/sde_team_2players/output_parser.py
delete mode 100644 agentverse/tasks/simulation/sde_team/sde_team_3players/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/brainstorming/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/commongen/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/humaneval/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/logic_grid/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/mgsm/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/pythoncalculator/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/responsegen/output_parser.py
delete mode 100644 agentverse/tasks/tasksolving/tool_using/output_parser.py
diff --git a/agentverse/__init__.py b/agentverse/__init__.py
index af88fdd9e..4512a99dd 100644
--- a/agentverse/__init__.py
+++ b/agentverse/__init__.py
@@ -1,7 +1,4 @@
-from .tasks import *
-
-
-# from .agents import Agent
+from .output_parser import output_parser_registry
from .environments import env_registry
from .environments.simulation_env.rules.order import order_registry
from .environments.simulation_env.rules.describer import describer_registry
diff --git a/agentverse/agents/base.py b/agentverse/agents/base.py
index 084bc5135..4f118e08c 100644
--- a/agentverse/agents/base.py
+++ b/agentverse/agents/base.py
@@ -8,7 +8,7 @@
from agentverse.llms import BaseLLM
from agentverse.memory import BaseMemory, ChatHistoryMemory
from agentverse.message import Message
-from agentverse.parser import OutputParser
+from agentverse.output_parser import OutputParser
from agentverse.memory_manipulator import BaseMemoryManipulator
diff --git a/agentverse/agents/simulation_agent/reflection.py b/agentverse/agents/simulation_agent/reflection.py
index e9303032b..bbbcf9f10 100644
--- a/agentverse/agents/simulation_agent/reflection.py
+++ b/agentverse/agents/simulation_agent/reflection.py
@@ -14,7 +14,7 @@
from agentverse.llms import BaseLLM
from agentverse.memory import BaseMemory, ChatHistoryMemory
from agentverse.message import Message
-from agentverse.parser import OutputParser
+from agentverse.output_parser import OutputParser
from agentverse.message import Message
from agentverse.agents.base import BaseAgent
diff --git a/agentverse/initialization.py b/agentverse/initialization.py
index ed21733e0..13ef54e77 100644
--- a/agentverse/initialization.py
+++ b/agentverse/initialization.py
@@ -19,7 +19,7 @@
from agentverse.memory import memory_registry
from agentverse.memory_manipulator import memory_manipulator_registry
-from agentverse.parser import output_parser_registry
+from agentverse.output_parser import output_parser_registry
if TYPE_CHECKING:
from agentverse.agents import BaseAgent
diff --git a/agentverse/output_parser/__init__.py b/agentverse/output_parser/__init__.py
new file mode 100644
index 000000000..8b54c8edd
--- /dev/null
+++ b/agentverse/output_parser/__init__.py
@@ -0,0 +1,5 @@
+from agentverse.registry import Registry
+
+output_parser_registry = Registry(name="OutputParserRegistry")
+
+from .output_parser import *
\ No newline at end of file
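The new `output_parser` package centralizes every parser behind a single `output_parser_registry`, with stacked `@register` decorators mapping several task names to one class. The registry pattern used throughout AgentVerse is roughly this (a simplified sketch, not the actual `agentverse.registry` code):

```python
import re

class Registry:
    """Maps string keys from config files to classes (simplified sketch)."""

    def __init__(self, name: str):
        self.name = name
        self.entries = {}

    def register(self, key):
        def decorator(cls):
            self.entries[key] = cls
            return cls
        return decorator

    def build(self, key, **kwargs):
        if key not in self.entries:
            raise KeyError(f"'{key}' is not registered in {self.name}")
        return self.entries[key](**kwargs)

output_parser_registry = Registry(name="OutputParserRegistry")

# Stacking decorators registers one class under several task names,
# as the consolidated output_parser.py does.
@output_parser_registry.register("math_problem_2players_tools")
@output_parser_registry.register("demo")
class ActionParser:
    """Parses the 'Action: ... / Action Input: ...' format used by many tasks."""

    def parse(self, text: str):
        lines = re.sub(r"\n+", "\n", text.strip()).split("\n")
        if not (
            len(lines) == 2
            and lines[0].startswith("Action:")
            and lines[1].startswith("Action Input:")
        ):
            raise ValueError(f"Failed to parse output: {text}")
        action = lines[0][len("Action:"):].strip()
        action_input = lines[1][len("Action Input:"):].strip()
        return action, action_input

parser = output_parser_registry.build("demo")
print(parser.parse("Action: Speak\nAction Input: Hello!"))
```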
diff --git a/agentverse/output_parser/output_parser.py b/agentverse/output_parser/output_parser.py
new file mode 100644
index 000000000..556d9ff6e
--- /dev/null
+++ b/agentverse/output_parser/output_parser.py
@@ -0,0 +1,621 @@
+from __future__ import annotations
+
+import re
+from abc import abstractmethod
+import json
+from typing import Union, List, Tuple, NamedTuple, TYPE_CHECKING
+
+from . import output_parser_registry
+
+from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
+
+from agentverse.llms import LLMResult
+from agentverse.logging import logger
+
+from pydantic import BaseModel
+
+if TYPE_CHECKING:
+ from agentverse.agents.base import BaseAgent
+ from agentverse.environments.base import BaseEnvironment
+
+class OutputParserError(Exception):
+ """Exception raised when parsing output from a command fails."""
+
+ def __init__(self, message):
+ self.message = message
+
+ def __str__(self):
+ return "Failed to parse output of the model: %s" % self.message
+
+
+class OutputParser(BaseModel):
+ """Base class for output parsers."""
+
+ @abstractmethod
+ def parse(self, output: LLMResult) -> NamedTuple:
+ pass
+
+
+@output_parser_registry.register("alice_home")
+class AliceHomeParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 2
+ and cleaned_output[0].startswith("Thought:")
+ and cleaned_output[1].startswith("Action:")
+ ):
+ raise OutputParserError(text)
+
+ action = cleaned_output[1][len("Action:") :].strip()
+
+ return AgentFinish({"output": action}, text)
+
+
+@output_parser_registry.register("db_diag")
+@output_parser_registry.register("nlp_classroom_3players_withtool")
+class CommonParser1(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 3
+ and cleaned_output[0].startswith("Thought:")
+ and cleaned_output[1].startswith("Action:")
+ and cleaned_output[2].startswith("Action Input:")
+ ):
+ raise OutputParserError(text)
+ action = cleaned_output[1][len("Action:") :].strip()
+ action_input = cleaned_output[2][len("Action Input:") :].strip()
+ if action in ["Speak"]:
+ return AgentFinish({"output": action_input}, text)
+ elif action == "CallOn":
+ return AgentFinish({"output": "[CallOn] " + action_input}, text)
+ elif action == "RaiseHand":
+ return AgentFinish({"output": "[RaiseHand] " + action_input}, text)
+ elif action == "Listen":
+ return AgentFinish({"output": ""}, text)
+ else:
+ return AgentAction(action.lower(), action_input, text)
+
+
+@output_parser_registry.register("math_problem_2players_tools")
+class MathProblem2PlayersToolsParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 2
+ and cleaned_output[0].startswith("Action:")
+ and cleaned_output[1].startswith("Action Input:")
+ ):
+ raise OutputParserError(text)
+ action = cleaned_output[0][len("Action:") :].strip()
+ action_input = cleaned_output[1][len("Action Input:") :].strip()
+ if action == "Speak":
+ return AgentFinish({"output": action_input}, text)
+ else:
+ return AgentAction(action, action_input, text)
+
+
+@output_parser_registry.register("nlp_classroom_3players")
+class NlpClassroom3PlayersParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 2
+ and cleaned_output[0].startswith("Action:")
+ and cleaned_output[1].startswith("Action Input:")
+ ):
+ raise OutputParserError(text)
+ action = cleaned_output[0][len("Action:") :].strip()
+ action_input = cleaned_output[1][len("Action Input:") :].strip()
+ if action == "Speak":
+ return AgentFinish({"output": action_input}, text)
+ else:
+ raise OutputParserError(text)
+
+
+@output_parser_registry.register("nlp_classroom_9players")
+class NlpClassroom9PlayersParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 2
+ and cleaned_output[0].startswith("Action:")
+ and cleaned_output[1].startswith("Action Input:")
+ ):
+ raise OutputParserError(text)
+ action = cleaned_output[0][len("Action:") :].strip()
+ action_input = cleaned_output[1][len("Action Input:") :].strip()
+ if action == "Speak":
+ return AgentFinish({"output": action_input}, text)
+ elif action == "CallOn":
+ return AgentFinish({"output": "[CallOn] " + action_input}, text)
+ elif action == "RaiseHand":
+ return AgentFinish({"output": "[RaiseHand] " + action_input}, text)
+ elif action == "Listen":
+ return AgentFinish({"output": ""}, text)
+ else:
+ return AgentAction(action, action_input, text)
+
+
+@output_parser_registry.register("nlp_classroom_9players_group")
+class NlpClassroom9PlayersGroupParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 2
+ and cleaned_output[0].startswith("Action:")
+ and cleaned_output[1].startswith("Action Input:")
+ ):
+ raise OutputParserError(text)
+ action = cleaned_output[0][len("Action:") :].strip()
+ action_input = cleaned_output[1][len("Action Input:") :].strip()
+ if action == "Speak":
+ return AgentFinish({"output": action_input}, text)
+ elif action in ["CallOn", "RaiseHand", "GroupDiscuss"]:
+ return AgentFinish({"output": f"[{action}] {action_input}"}, text)
+ elif action == "Listen":
+ return AgentFinish({"output": ""}, text)
+ else:
+ return AgentAction(action, action_input, text)
+
+
+@output_parser_registry.register("pokemon")
+class PokemonParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 3
+ and cleaned_output[0].startswith("Thought:")
+ and cleaned_output[1].startswith("Action:")
+ and cleaned_output[2].startswith("Action Input:")
+ ):
+ raise OutputParserError(text)
+ action = cleaned_output[1][len("Action:") :].strip()
+ action_input = cleaned_output[2][len("Action Input:") :].strip()
+ try:
+ action_input = json.loads(action_input)
+ except json.JSONDecodeError:
+ raise OutputParserError(text)
+ action_input["action"] = action
+ return AgentFinish({"output": json.dumps(action_input)}, text)
+
+
+@output_parser_registry.register("prisoner_dilemma")
+class PrisonerDilemmaParser(OutputParser):
+ # make sure 1 1 2 2 3 3
+ cur_round: int = 1
+ encounter_cur_round: bool = False
+
+ def parse(
+ self, agent: "BaseAgent", environment: "BaseEnvironment", output: LLMResult
+ ) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ cleaned_output = text.strip()
+ cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
+ cleaned_output = cleaned_output.split("\n")
+ if not (
+ len(cleaned_output) == 2
+ and cleaned_output[0].startswith("Action:")
+ and cleaned_output[1].startswith("Action Input:")
+ ):
+ raise OutputParserError(text)
+ action = cleaned_output[0][len("Action:") :].strip()
+ action_input = cleaned_output[1][len("Action Input:") :].strip()
+
+ if action == "Speak":
+ # make sure the police count the round right
+ # if agent.name == "Police":
+ # action_input = re.sub(r'Round (\d+)', f'Round {self.cur_round}', action_input)
+ # self.cur_round += 1
+ # if self.encounter_cur_round:
+ # self.encounter_cur_round = False
+ # self.cur_round += 1
+ # else:
+ # self.encounter_cur_round = True
+
+ # each time police speak is a new round
+ if agent.name == "Police":
+ if environment.cnt_turn == (environment.max_turns - 4):
+ action_input = (
+ "Attention! You are now required to make your final decision, and I will make the "
+ "final judgement on both of you based on it. Please answer now!"
+ )
+
+ elif environment.cnt_turn == (environment.max_turns - 2):
+ action_input = "Attention! Suspect2, it's now your turn to make your final decision. Please answer now!"
+
+ # elif self.cur_round == 1:
+ # action_input = "Hey Listen! You are both arrested, and I am going to give you both a chance to walk out of here," \
+ # "But you should comply with the following rules:" \
+ # "- If one of you are willing to testifies against the other and the other one remains silent, then the one who testifies will be released IMMEDIATELY, while the silent one will be sentenced to TEN years in prison." \
+ # "- If both of you remain silent, you will each receive a sentence of ONE year in prison." \
+ # "- It seems that always testifying is a good strategy, So! if you both choose to testify against each other, you will each receive a sentence of FIVE years in prison." \
+ # "Now, it's your time to consider testifying or remaining silent. Remember this is a best chance you might ever have to walk out of here without guilty." \
+ # "I will noticed both of you WHEN you have to make your final decision! Before that, try to make your best!" \
+
+ self.cur_round += 1
+
+ return AgentFinish({"output": action_input}, text)
+ else:
+ raise OutputParserError(text)
+
+
+@output_parser_registry.register("sde_team/sde_team_2players")
+@output_parser_registry.register("sde_team/sde_team_3players")
+@output_parser_registry.register("commongen")
+@output_parser_registry.register("humaneval-manager")
+@output_parser_registry.register("mgsm")
+@output_parser_registry.register("dummy")
+@output_parser_registry.register("responsegen")
+class CommonParser2(OutputParser):
+ # def parse(self, agent, env, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ return AgentFinish({"output": output.content}, output.content)
+
+
+@output_parser_registry.register("role_assigner")
+class RoleAssignerParser(OutputParser):
+ cnt_critic_agents: int = 0
+
+ def parse(self, output: LLMResult) -> List[str]:
+ text = output.content
+ pattern = re.compile(r"\d\.\s*(.+)")
+ roles = pattern.findall(text)
+ if len(roles) < self.cnt_critic_agents:
+ logger.error(
+ f"Role assigner failed to assign roles to {self.cnt_critic_agents} critics!"
+ )
+ raise OutputParserError(text)
+ return roles
+
+
+@output_parser_registry.register("evaluator")
+class EvaluatorParser(OutputParser):
+ dimensions: List[str] = None
+
+ def parse(self, output: LLMResult) -> Tuple[List[int], str]:
+ text = output.content
+ cleaned_output = re.sub(r"\n+", "\n", text.strip())
+ checks = cleaned_output.split("\n")
+ patterns = [
+ re.compile(r"(?:\d\.\s*)?" + dimension + r":\s*(\d)")
+ for dimension in self.dimensions
+ ]
+ try:
+ # find score and advice
+ score = [
+ int(pattern.findall(checks[i])[0]) for i, pattern in enumerate(patterns)
+ ]
+ advice_text = "".join(checks[len(self.dimensions) :])
+ advice = re.findall(r"(?:\d\.\s*)?Advice:\s*(.+)", advice_text)[0]
+ # logger.info("Evaluator give the following advice:\n" + advice)
+ except (IndexError, ValueError):
+ # logger.error("Bad response from evaluator!")
+ raise OutputParserError(text)
+ return score, advice
+
+
+@output_parser_registry.register("humaneval-solver")
+class HumanevalSolverParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ # start_pos = text.find("```")
+ # end_pos = text.rfind("```")
+ # if end_pos == -1:
+ # raise OutputParserError(text)
+ # text = text[start_pos:end_pos]
+ # cleaned_output = text.strip().strip("```").strip()
+ # if cleaned_output.startswith("python"):
+ # cleaned_output = cleaned_output[6:].strip()
+ # elif cleaned_output.startswith("python3"):
+ # cleaned_output = cleaned_output[7:].strip()
+ code = re.findall(r"```.*?\n(.+?)```", text, re.DOTALL)[-1]
+
+ return AgentFinish({"output": code}, text)
+
+
+@output_parser_registry.register("humaneval-executor")
+class HumanevalExecutorParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ try:
+ parsed_result = re.findall(
+ r"Thought:(.+?)Reasoning:(.+?)Criticism:(.+?)File Path:(.+?)Code:(.+?)Command:(.+)",
+ text,
+ re.DOTALL,
+ )[0]
+ cleaned_output = {
+ "thought": parsed_result[0].strip(),
+ "reasoning": parsed_result[1].strip(),
+ "criticism": parsed_result[2].strip(),
+ "file_path": parsed_result[3].strip().strip("`"),
+            # str.strip() removes *characters*, not prefixes, so the previous
+            # .strip("```").strip("python") chain could eat leading letters of
+            # the code; remove the fence markers explicitly instead
+            "code": re.sub(
+                r"^```(?:python3?)?\s*|\s*```$", "", parsed_result[4].strip()
+            ),
+ "command": parsed_result[5].strip().strip("`"),
+ }
+        except IndexError:
+ raise OutputParserError(text)
+
+ return AgentFinish({"output": cleaned_output}, text)
+
+
+@output_parser_registry.register("humaneval-evaluator")
+class HumanevalEvaluatorParser(OutputParser):
+ dimensions: List[str] = None
+
+    def parse(self, output: LLMResult) -> Tuple[bool, str]:
+ text = output.content
+ cleaned_output = re.sub(r"\n+", "\n", text.strip())
+ checks = cleaned_output.split("\n")
+
+ patterns = [
+            re.compile(r"(?:\d\.\s*)?" + dimension + r":\s*(\d)")
+ for dimension in self.dimensions
+ ]
+
+ advice = ""
+ for check in reversed(checks):
+ advice = check + advice
+ if check.startswith("Advice:"):
+ break
+ checks[-1] = advice
+ try:
+ # find score and advice
+ score = []
+ for pattern in patterns:
+ for check in checks[:-1]:
+ if pattern.findall(check):
+ score.append(bool(int(pattern.findall(check)[0])))
+ break
+            advice = re.findall(r"(?:\d\.\s*)?Advice:\s*(.+)", checks[-1])[0]
+ # logger.info("Evaluator give the following advice:\n" + advice)
+ except (IndexError, ValueError):
+ # logger.error("Bad response from evaluator!")
+ raise OutputParserError(text)
+ return score[0], advice
+
+
+@output_parser_registry.register("humaneval-critic-agree")
+class HumanevalCriticParser(OutputParser):
+ def parse(self, output: LLMResult) -> AgentCriticism:
+ text = output.content
+ if "[Agree]" in text:
+ return AgentCriticism(True, "")
+ else:
+ return AgentCriticism(False, text)
+
+
+@output_parser_registry.register("mgsm-evaluator")
+class MGSMEvaluatorParser(OutputParser):
+ dimensions: List[str] = None
+
+    def parse(self, output: LLMResult) -> Tuple[bool, str]:
+ text = output.content
+ cleaned_output = re.sub(r"\n+", "\n", text.strip())
+ # checks = cleaned_output.split("\n")
+
+ patterns = [
+            re.compile(r"(?:\d\.\s*)?" + dimension + r":\s*(\d)")
+ for dimension in self.dimensions
+ ]
+ try:
+ # find score and advice
+ score_num = [
+ int(pattern.findall(cleaned_output)[0])
+            for pattern in patterns
+ ][0]
+ if score_num == 0:
+ score = False
+ elif score_num == 1:
+ score = True
+ else:
+ raise ValueError("Bad score!")
+        pat = re.compile(r"(?:\d\.\s*)?Response:\s*(.+)", re.DOTALL)
+ advice = pat.findall(cleaned_output)[0]
+ # logger.info("Evaluator give the following advice:\n" + advice)
+ except (IndexError, ValueError):
+ # logger.error("Bad response from evaluator!")
+ raise OutputParserError(text)
+ return score, advice
+
+
+@output_parser_registry.register("mgsm-critic-agree")
+class MGSMCriticAgreeParser(OutputParser):
+ def parse(self, output: LLMResult) -> AgentCriticism:
+ text = output.content
+ text = re.sub(r"\n+", "\n", text.strip())
+ # checks = text.split("\n")
+ # if not text.startswith("Thought:"):
+ # raise OutputParserError(text)
+ # if not (checks[0].startswith("Action:")):
+ # raise OutputParserError(text)
+ # if checks[0].strip(". ") == "Action: Agree":
+ # return AgentCriticism(True, "")
+ if "[Agree]" in text:
+ return AgentCriticism(True, "")
+ else:
+ # pattern = re.compile(r"Action Input: ([\S\n ]+)")
+ # try:
+ # criticism = pattern.findall(text)[0].strip()
+ # criticism = (
+ # re.findall(r"Output:\S?(.+)", text)[0].replace("[Wrong]", "")
+ # ).strip()
+ criticism = text.replace("[Disagree]", "").strip()
+ # except IndexError:
+ # logger.error("Bad response from critic!")
+ # raise OutputParserError(text)
+ # criticism = "I think the solution is not correct. Please think carefully and correct it."
+ return AgentCriticism(False, criticism)
+ # else:
+ # raise OutputParserError(text)
+
+
+@output_parser_registry.register("responsegen-evaluator")
+class ResponseGenEvaluatorParser(OutputParser):
+ dimensions: List[str] = None
+
+ def parse(self, output: LLMResult) -> Tuple[List[int], str]:
+ text = output.content
+ cleaned_output = re.sub(r"\n+", "\n", text.strip())
+ checks = cleaned_output.split("\n")
+
+ patterns = [
+            re.compile(r"(?:\d\.\s*)?" + dimension + r":\s*(\d+)")
+ for dimension in self.dimensions
+ ]
+
+ advice = ""
+ for check in reversed(checks):
+ advice = check + advice
+ if check.startswith("Advice:"):
+ break
+ checks[-1] = advice
+ try:
+ # find score and advice
+ score = [
+ int(pattern.findall(checks[i])[0]) for i, pattern in enumerate(patterns)
+ ]
+            advice = re.findall(r"(?:\d\.\s*)?Advice:\s*(.+)", checks[-1])[0]
+ # logger.info("Evaluator give the following advice:\n" + advice)
+ except (IndexError, ValueError):
+ # logger.error("Bad response from evaluator!")
+ raise OutputParserError(text)
+ return score, advice
+
+
+@output_parser_registry.register("responsegen-critic")
+@output_parser_registry.register("critic")
+class CommonParser3(OutputParser):
+ def parse(self, output: LLMResult) -> AgentCriticism:
+ text = output.content
+ text = re.sub(r"\n+", "\n", text.strip())
+ checks = text.split("\n")
+ if not (checks[0].startswith("Action:")):
+ raise OutputParserError(text)
+ if checks[0].strip(". ") == "Action: Agree":
+ return AgentCriticism(True, "")
+ elif checks[0].strip(". ") == "Action: Disagree":
+ pattern = re.compile(r"Action Input: ([\S\n ]+)")
+ try:
+ criticism = pattern.findall(text)[0].strip()
+ except IndexError:
+ criticism = (
+ "I think it is not correct. Please think carefully and improve it."
+ )
+ # raise OutputParserError(text)
+ return AgentCriticism(False, criticism)
+ else:
+ raise OutputParserError(text)
+
+
+@output_parser_registry.register("responsegen-critic-2")
+class ResponseGenCriticParser(OutputParser):
+ def parse(self, output: LLMResult) -> AgentCriticism:
+ text = output.content
+ # text = re.sub(r"\n+", "\n", text.strip())
+ # checks = text.split("\n")
+ # if not (checks[0].startswith("Action:")):
+ # raise OutputParserError(text)
+ # if checks[0].strip(". ") == "Action: Agree":
+ # return AgentCriticism(True, "")
+ # elif checks[0].strip(". ") == "Action: Disagree":
+ # pattern = re.compile(r"Action Input: ([\S\n ]+)")
+ # try:
+ # criticism = pattern.findall(text)[0].strip()
+ # except IndexError:
+ # # criticism = "I think the solution is not correct. Please think carefully and correct it."
+ # raise OutputParserError(text)
+ # return AgentCriticism(False, criticism)
+ # else:
+ # raise OutputParserError(text)
+ result = re.findall(r"Decision:(.+?)Response:(.+)", text, re.DOTALL)
+ if len(result) == 0:
+ result = ["Disagree", "I think the response can be further improved."]
+ else:
+ result = result[0]
+ if "Agree" in result[0]:
+ return AgentCriticism(True, "")
+ else:
+ return AgentCriticism(False, result[1].strip())
+
+
+@output_parser_registry.register("role-description-name-assigner")
+class RoleDescriptionNameAssignerParser(OutputParser):
+ cnt_critic_agents: int = 0
+
+ def parse(self, output: LLMResult) -> List[str]:
+ text = output.content
+ pattern = re.compile(r"\d+?\.\s*(.+?) - (.+)")
+ roles = pattern.findall(text)
+ if len(roles) < self.cnt_critic_agents:
+ logger.error(
+ f"Role assigner failed to assign roles to {self.cnt_critic_agents} critics!"
+ )
+ raise OutputParserError(text)
+ res = []
+ for role in roles:
+ res.append({"name": role[0], "description": role[1]})
+ return res
+
+
+@output_parser_registry.register("tool-using-solver")
+class SolverParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ text = output.content
+ pattern = re.compile(r"\d+?\.\s*(.+?) - (.+)")
+ tasks = pattern.findall(text)
+ if len(tasks) == 0:
+ raise OutputParserError(text)
+ return AgentFinish({"output": tasks}, text)
+
+
+@output_parser_registry.register("tool-using-executor")
+class ToolUsingSolverParser(OutputParser):
+ def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
+ if output.function_name != "":
+ return AgentAction(
+ tool=output.function_name,
+ tool_input=output.function_arguments,
+ log=output.content,
+ )
+ else:
+ return AgentFinish({"output": output.content}, output.content)
+
+
+@output_parser_registry.register("tool-using-evaluator")
+class ToolUsingEvaluatorParser(OutputParser):
+    def parse(self, output: LLMResult) -> Tuple[bool, str]:
+ text = output.content
+ try:
+ result = re.findall(r"Status:(.+?)Speak:(.+)", text, re.DOTALL)[0]
+ score = bool(int(result[0]))
+ words = result[1].strip()
+ except (IndexError, ValueError):
+ # logger.error("Bad response from evaluator!")
+ raise OutputParserError(text)
+ return score, words
diff --git a/agentverse/parser.py b/agentverse/parser.py
deleted file mode 100644
index abe7ae14e..000000000
--- a/agentverse/parser.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from agentverse.registry import Registry
-from typing import NamedTuple
-from abc import abstractmethod
-from agentverse.llms.base import LLMResult
-from pydantic import BaseModel
-
-output_parser_registry = Registry(name="OutputParserRegistry")
-
-
-class OutputParserError(Exception):
- """Exception raised when parsing output from a command fails."""
-
- def __init__(self, message):
- self.message = message
-
- def __str__(self):
- return "Failed to parse output of the model:%s\n " % self.message
-
-
-class OutputParser(BaseModel):
- """Base class for output parsers."""
-
- @abstractmethod
- def parse(self, output: LLMResult) -> NamedTuple:
- pass
diff --git a/agentverse/tasks/__init__.py b/agentverse/tasks/__init__.py
index 77f33815d..426bc0f3a 100644
--- a/agentverse/tasks/__init__.py
+++ b/agentverse/tasks/__init__.py
@@ -1,30 +1,4 @@
import os
import yaml
-from .simulation.math_problem_2players_tools.output_parser import (
- MathProblem2PlayersToolsParser,
-)
-from .simulation.nlp_classroom_3players.output_parser import NlpClassroom3PlayersParser
-from .simulation.nlp_classroom_9players.output_parser import NlpClassroom9PlayersParser
-from .simulation.nlp_classroom_3players_withtool.output_parser import (
- NlpClassroom3PlayersWithtoolParser,
-)
-from .simulation.nlp_classroom_9players_group.output_parser import (
- NlpClassroom9PlayersGroupParser,
-)
-from .simulation.db_diag.output_parser import DBDiag
-
-from .simulation.prisoner_dilemma.output_parser import PrisonerDilemmaParser
-
-from .simulation.pokemon.output_parser import PokemonParser
-from .simulation.sde_team.sde_team_3players.output_parser import SdeTeamParser
-from .simulation.sde_team.sde_team_2players.output_parser import SdeTeamGivenTestsParser
-
-from .tasksolving.pythoncalculator.output_parser import PipelinePythoncalculatorParser
-from .tasksolving.brainstorming.output_parser import *
-from .tasksolving.humaneval.output_parser import *
-from .tasksolving.tool_using.output_parser import *
-from .tasksolving.mgsm.output_parser import *
-from .tasksolving.responsegen.output_parser import *
-from .tasksolving.logic_grid.output_parser import *
-from .tasksolving.commongen.output_parser import *
+from agentverse.output_parser import *
diff --git a/agentverse/tasks/simulation/alice_home/output_parser.py b/agentverse/tasks/simulation/alice_home/output_parser.py
deleted file mode 100644
index 32d6a5faa..000000000
--- a/agentverse/tasks/simulation/alice_home/output_parser.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-from agentverse.parser import OutputParser, LLMResult
-
-from agentverse.utils import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry
-
-
-@output_parser_registry.register("alice_home")
-class AliceHomeParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Thought:")
- and cleaned_output[1].startswith("Action:")
- ):
- raise OutputParserError(text)
-
- action = cleaned_output[1][len("Action:"):].strip()
-
- return AgentFinish({"output": action}, text)
diff --git a/agentverse/tasks/simulation/db_diag/output_parser.py b/agentverse/tasks/simulation/db_diag/output_parser.py
deleted file mode 100644
index 9fad13f78..000000000
--- a/agentverse/tasks/simulation/db_diag/output_parser.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-
-# from langchain.schema import AgentAction, AgentFinish
-from agentverse.utils import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry
-from agentverse.parser import OutputParser
-from agentverse.llms.base import LLMResult
-
-
-@output_parser_registry.register("db_diag")
-class DBDiag(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 3
- and cleaned_output[0].startswith("Thought:")
- and cleaned_output[1].startswith("Action:")
- and cleaned_output[2].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[1][len("Action:") :].strip()
- action_input = cleaned_output[2][len("Action Input:") :].strip()
- if action in ["Speak"]:
- return AgentFinish({"output": action_input}, text)
- elif action == "CallOn":
- return AgentFinish({"output": "[CallOn] " + action_input}, text)
- elif action == "RaiseHand":
- return AgentFinish({"output": "[RaiseHand] " + action_input}, text)
- elif action == "Listen":
- return AgentFinish({"output": ""}, text)
- else:
- return AgentAction(action.lower(), action_input, text)
diff --git a/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml b/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml
index 1599d8356..6e154b9d2 100644
--- a/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml
+++ b/agentverse/tasks/simulation/math_problem_2players_tools/config.yaml
@@ -12,12 +12,12 @@ prompts:
When responding, please use the following two-line format:
[Option 1]: When you need to use a tool, output in the following format (omit the "[]" bracket when responding)
- ACTION: (a tool name, it can be one of [${tool_names}])
- ACTION INPUT: (input arguments for the tool)
+ Action: (a tool name, it can be one of [${tool_names}])
+ Action Input: (input arguments for the tool)
[Option 2]: When you want to speak, you can use the following format:
- ACTION: Speak
- ACTION INPUT: (what you want to say in a single line)
+ Action: Speak
+ Action Input: (what you want to say in a single line)
Here is the conversation history
${chat_history}
@@ -25,7 +25,7 @@ prompts:
Here is the observations from tool execution:
${tool_observation}
- Now the game starts! ${role_description} You should give your action based on the above history. Remember, you should ALWAYS give your response STRICTLY in the above response format with the TWO lines start with "ACTION:" and "ACTION INPUT:" respectively!
+ Now the game starts! ${role_description} You should give your action based on the above history. Remember, you should ALWAYS give your response STRICTLY in the above response format with the TWO lines start with "Action:" and "Action Input:" respectively!
summary_prompt: &summary_prompt |
Progressively summarize the lines of a record that you uses tools, which contains inputs for certain tools and the results returned by these tools. Based on the current summary, you need to summarize from the record the goals that the you intended to solve with each call to the tool, add it onto the previous summary, and eventually return a new summary.
diff --git a/agentverse/tasks/simulation/math_problem_2players_tools/output_parser.py b/agentverse/tasks/simulation/math_problem_2players_tools/output_parser.py
deleted file mode 100644
index 24729b307..000000000
--- a/agentverse/tasks/simulation/math_problem_2players_tools/output_parser.py
+++ /dev/null
@@ -1,32 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms.base import LLMResult
-from agentverse.utils import AgentAction, AgentFinish
-
-
-@output_parser_registry.register("math_problem_2players_tools")
-class MathProblem2PlayersToolsParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and
- # cleaned_output[0].startswith("THOUGHT:") and
- cleaned_output[0].startswith("ACTION:")
- and cleaned_output[1].startswith("ACTION INPUT:")
- ):
- print(text)
- raise OutputParserError("Output Format Error")
- action = cleaned_output[0][len("ACTION:") :].strip()
- action_input = cleaned_output[1][len("ACTION INPUT:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- else:
- return AgentAction(action, action_input, text)
diff --git a/agentverse/tasks/simulation/nlp_classroom_3players/output_parser.py b/agentverse/tasks/simulation/nlp_classroom_3players/output_parser.py
deleted file mode 100644
index 4c4563bb4..000000000
--- a/agentverse/tasks/simulation/nlp_classroom_3players/output_parser.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-from agentverse.parser import OutputParser, LLMResult
-
-from agentverse.utils import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry
-
-
-@output_parser_registry.register("nlp_classroom_3players")
-class NlpClassroom3PlayersParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- else:
- raise OutputParserError(text)
diff --git a/agentverse/tasks/simulation/nlp_classroom_3players_withtool/output_parser.py b/agentverse/tasks/simulation/nlp_classroom_3players_withtool/output_parser.py
deleted file mode 100644
index 2bb9a0aef..000000000
--- a/agentverse/tasks/simulation/nlp_classroom_3players_withtool/output_parser.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-
-from agentverse.utils import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry
-from agentverse.parser import OutputParser
-from agentverse.llms.base import LLMResult
-
-
-@output_parser_registry.register("nlp_classroom_3players_withtool")
-class NlpClassroom3PlayersWithtoolParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 3
- and cleaned_output[0].startswith("Thought:")
- and cleaned_output[1].startswith("Action:")
- and cleaned_output[2].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[1][len("Action:") :].strip()
- action_input = cleaned_output[2][len("Action Input:") :].strip()
- if action in ["Speak"]:
- return AgentFinish({"output": action_input}, text)
- elif action == "CallOn":
- return AgentFinish({"output": "[CallOn] " + action_input}, text)
- elif action == "RaiseHand":
- return AgentFinish({"output": "[RaiseHand] " + action_input}, text)
- elif action == "Listen":
- return AgentFinish({"output": ""}, text)
- else:
- return AgentAction(action.lower(), action_input, text)
diff --git a/agentverse/tasks/simulation/nlp_classroom_9players/output_parser.py b/agentverse/tasks/simulation/nlp_classroom_9players/output_parser.py
deleted file mode 100644
index 6a10c7f6d..000000000
--- a/agentverse/tasks/simulation/nlp_classroom_9players/output_parser.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-from agentverse.utils import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-
-
-@output_parser_registry.register("nlp_classroom_9players")
-class NlpClassroom9PlayersParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- elif action == "CallOn":
- return AgentFinish({"output": "[CallOn] " + action_input}, text)
- elif action == "RaiseHand":
- return AgentFinish({"output": "[RaiseHand] " + action_input}, text)
- elif action == "Listen":
- return AgentFinish({"output": ""}, text)
- else:
- return AgentAction(action, action_input, text)
diff --git a/agentverse/tasks/simulation/nlp_classroom_9players_group/output_parser.py b/agentverse/tasks/simulation/nlp_classroom_9players_group/output_parser.py
deleted file mode 100644
index 91c87c89b..000000000
--- a/agentverse/tasks/simulation/nlp_classroom_9players_group/output_parser.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-from agentverse.utils import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-
-
-@output_parser_registry.register("nlp_classroom_9players_group")
-class NlpClassroom9PlayersGroupParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- elif action in ["CallOn", "RaiseHand", "GroupDiscuss"]:
- return AgentFinish({"output": f"[{action}] {action_input}"}, text)
- elif action == "Listen":
- return AgentFinish({"output": ""}, text)
- else:
- return AgentAction(action, action_input, text)
diff --git a/agentverse/tasks/simulation/pokemon/output_parser.py b/agentverse/tasks/simulation/pokemon/output_parser.py
deleted file mode 100644
index e96f7e3cf..000000000
--- a/agentverse/tasks/simulation/pokemon/output_parser.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from __future__ import annotations
-
-import re
-import json
-from typing import Union
-
-from agentverse.parser import OutputParser, LLMResult
-
-# from langchain.schema import AgentAction, AgentFinish
-from agentverse.utils import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry
-
-
-@output_parser_registry.register("pokemon")
-class PokemonParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 3
- and cleaned_output[0].startswith("Thought:")
- and cleaned_output[1].startswith("Action:")
- and cleaned_output[2].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[1][len("Action:") :].strip()
- action_input = cleaned_output[2][len("Action Input:") :].strip()
- try:
- action_input = json.loads(action_input)
- except json.JSONDecodeError:
- raise OutputParserError(text)
- action_input["action"] = action
- return AgentFinish({"output": json.dumps(action_input)}, text)
diff --git a/agentverse/tasks/simulation/prisoner_dilemma/output_parser.py b/agentverse/tasks/simulation/prisoner_dilemma/output_parser.py
deleted file mode 100644
index 05a41e0f8..000000000
--- a/agentverse/tasks/simulation/prisoner_dilemma/output_parser.py
+++ /dev/null
@@ -1,74 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union, TYPE_CHECKING
-
-# from langchain.agents import AgentOutputParser
-from agentverse.parser import OutputParser, LLMResult
-#from langchain.schema import AgentAction, AgentFinish
-from agentverse.utils import AgentAction, AgentFinish
-from agentverse.parser import OutputParserError, output_parser_registry
-
-if TYPE_CHECKING:
- from agentverse.agents.base import BaseAgent
- from agentverse.environments.base import BaseEnvironment
-
-
-@output_parser_registry.register("prisoner_dilemma")
-class PrisonerDilemmaParser(OutputParser):
- # make sure 1 1 2 2 3 3
- cur_round: int = 1
- encounter_cur_round: bool = False
-
- def parse(
- self, agent: "BaseAgent", environment: "BaseEnvironment", output: LLMResult
- ) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
-
- if action == "Speak":
- # make sure the police count the round right
- # if agent.name == "Police":
- # action_input = re.sub(r'Round (\d+)', f'Round {self.cur_round}', action_input)
- # self.cur_round += 1
- # if self.encounter_cur_round:
- # self.encounter_cur_round = False
- # self.cur_round += 1
- # else:
- # self.encounter_cur_round = True
-
- # each time police speak is a new round
- if agent.name == "Police":
- if environment.cnt_turn == (environment.max_turns - 4):
- action_input = (
- "Attention! You are now required to made your final decision and I will made the "
- "final judgement to both of you based on this time, Please Answer now !"
- )
-
- elif environment.cnt_turn == (environment.max_turns - 2):
- action_input = "Attention! Suspect2, it's now your time to make your final decision, Please Answer now !"
-
- # elif self.cur_round == 1:
- # action_input = "Hey Listen! You are both arrested, and I am going to give you both a chance to walk out of here," \
- # "But you should comply with the following rules:" \
- # "- If one of you are willing to testifies against the other and the other one remains silent, then the one who testifies will be released IMMEDIATELY, while the silent one will be sentenced to TEN years in prison." \
- # "- If both of you remain silent, you will each receive a sentence of ONE year in prison." \
- # "- It seems that always testifying is a goog strategy, So! if you both choose to testify against each other, you will each receive a sentence of FIVE years in prison." \
- # "Now, it's your time to consider testifying or remaining silent. Remember this is a best chance you might ever have to walk out of here without guilty." \
- # "I will noticed both of you WHEN you have to make your final decision! Before that, try to make your best!" \
-
- self.cur_round += 1
-
- return AgentFinish({"output": action_input}, text)
- else:
- raise OutputParserError(text)
diff --git a/agentverse/tasks/simulation/sde_team/sde_team_2players/output_parser.py b/agentverse/tasks/simulation/sde_team/sde_team_2players/output_parser.py
deleted file mode 100644
index f7cea629c..000000000
--- a/agentverse/tasks/simulation/sde_team/sde_team_2players/output_parser.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-#from langchain.agents import AgentOutputParser
-
-# from langchain.schema import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms.base import LLMResult
-from agentverse.utils import AgentAction, AgentFinish
-
-
-@output_parser_registry.register("sde_team/sde_team_2players")
-class SdeTeamGivenTestsParser(OutputParser):
- #def parse(self, agent, env, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
diff --git a/agentverse/tasks/simulation/sde_team/sde_team_3players/output_parser.py b/agentverse/tasks/simulation/sde_team/sde_team_3players/output_parser.py
deleted file mode 100644
index 304dd7f99..000000000
--- a/agentverse/tasks/simulation/sde_team/sde_team_3players/output_parser.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union
-
-# from langchain.agents import AgentOutputParser
-
-# from langchain.schema import AgentAction, AgentFinish
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms.base import LLMResult
-from agentverse.utils import AgentAction, AgentFinish
-
-
-@output_parser_registry.register("sde_team/sde_team_3players")
-class SdeTeamParser(OutputParser):
- #def parse(self, agent, env, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
diff --git a/agentverse/tasks/tasksolving/brainstorming/output_parser.py b/agentverse/tasks/tasksolving/brainstorming/output_parser.py
deleted file mode 100644
index d0041ec46..000000000
--- a/agentverse/tasks/tasksolving/brainstorming/output_parser.py
+++ /dev/null
@@ -1,77 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-
-logger = get_logger()
-
-
-@output_parser_registry.register("role_assigner")
-class RoleAssignerParser(OutputParser):
- cnt_critic_agents: int = 0
-
- def parse(self, output: LLMResult) -> List[str]:
- text = output.content
- pattern = re.compile(r"\d\.\s*(.+)")
- roles = pattern.findall(text)
- if len(roles) < self.cnt_critic_agents:
- logger.error(
- f"Role assigner failed to assign roles to {self.cnt_critic_agents} critics!"
- )
- raise OutputParserError(text)
- return roles
-
-
-@output_parser_registry.register("critic")
-class CriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- text = re.sub(r"\n+", "\n", text.strip())
- checks = text.split("\n")
- if not (checks[0].startswith("Action:")):
- raise OutputParserError(text)
- if checks[0].strip(". ") == "Action: Agree":
- return AgentCriticism(True, "")
- elif checks[0].strip(". ") == "Action: Disagree":
- pattern = re.compile(r"Action Input: ([\S\n ]+)")
- try:
- criticism = pattern.findall(text)[0].strip()
- except IndexError:
- logger.error("Bad response from critic!")
- raise OutputParserError(text)
- return AgentCriticism(False, criticism)
- else:
- raise OutputParserError(text)
-
-
-@output_parser_registry.register("evaluator")
-class EvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- checks = cleaned_output.split("\n")
- patterns = [
- re.compile(r"(?:\d\.\s*)?" + dimension + r":\s*(\d)")
- for dimension in self.dimensions
- ]
- try:
- # find score and advice
- score = [
- int(pattern.findall(checks[i])[0]) for i, pattern in enumerate(patterns)
- ]
- advice_text = "".join(checks[len(self.dimensions) :])
- advice = re.findall(r"(?:\d\.\s*)?Advice:\s*(.+)", advice_text)[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, advice
diff --git a/agentverse/tasks/tasksolving/commongen/output_parser.py b/agentverse/tasks/tasksolving/commongen/output_parser.py
deleted file mode 100644
index 88d3e166a..000000000
--- a/agentverse/tasks/tasksolving/commongen/output_parser.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-logger = get_logger()
-
-
-@output_parser_registry.register("commongen")
-class CommongenParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
-
-
-@output_parser_registry.register("commongen-solver")
-class CommongenSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- end_pos = text.rfind("```")
- if end_pos == -1:
- raise OutputParserError(text)
- text = text[:end_pos]
- cleaned_output = text.strip().strip("```").strip()
- if cleaned_output.startswith("python"):
- cleaned_output = cleaned_output[6:].strip()
- elif cleaned_output.startswith("python3"):
- cleaned_output = cleaned_output[7:].strip()
- return AgentFinish({"output": cleaned_output}, text)
-
-
-@output_parser_registry.register("commongen-evaluator")
-class CommongenEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- checks = cleaned_output.split("\n")
-
- patterns = [
- re.compile(r"(?:\d.\s*)?" + dimension + r":\s*(\d)")
- for dimension in self.dimensions
- ]
-
- advice = ""
- for check in reversed(checks):
- advice = check + advice
- if check.startswith("Advice:"):
- break
- checks[-1] = advice
- try:
- # find score and advice
- score = []
- for pattern in patterns:
- for check in checks[:-1]:
- if pattern.findall(check):
- score.append(bool(pattern.findall(check)[0]))
- break
- advice = re.findall(r"(?:\d.\s*)?Advice:\s*(.+)", checks[-1])[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score[0], advice
-
-
-@output_parser_registry.register("commongen-critic")
-class CommongenCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- text = re.sub(r"\n+", "\n", text.strip())
- checks = text.split("\n")
- if not (checks[0].startswith("Action:")):
- raise OutputParserError(text)
- if checks[0].strip(". ") == "Action: Agree":
- return AgentCriticism(True, "")
- elif checks[0].strip(". ") == "Action: Disagree":
- pattern = re.compile(r"Action Input: ([\S\n ]+)")
- try:
- criticism = pattern.findall(text)[0].strip()
- except IndexError:
- # logger.error("Bad response from critic!")
- # raise OutputParserError(text)
- criticism = "I think the solution is not correct. Please think carefully and correct it."
- return AgentCriticism(False, criticism)
- else:
- raise OutputParserError(text)
diff --git a/agentverse/tasks/tasksolving/humaneval/output_parser.py b/agentverse/tasks/tasksolving/humaneval/output_parser.py
deleted file mode 100644
index 1eb9c023d..000000000
--- a/agentverse/tasks/tasksolving/humaneval/output_parser.py
+++ /dev/null
@@ -1,311 +0,0 @@
-from __future__ import annotations
-
-import re
-import json
-import ast
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-logger = get_logger()
-
-
-@output_parser_registry.register("humaneval")
-class HumanevalParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
-
-
-@output_parser_registry.register("humaneval-solver")
-class HumanevalSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- # start_pos = text.find("```")
- # end_pos = text.rfind("```")
- # if end_pos == -1:
- # raise OutputParserError(text)
- # text = text[start_pos:end_pos]
- # cleaned_output = text.strip().strip("```").strip()
- # if cleaned_output.startswith("python"):
- # cleaned_output = cleaned_output[6:].strip()
- # elif cleaned_output.startswith("python3"):
- # cleaned_output = cleaned_output[7:].strip()
- code = re.findall(r"```.*?\n(.+?)```", text, re.DOTALL)[-1]
-
- return AgentFinish({"output": code}, text)
-
-
-@output_parser_registry.register("humaneval-critic-central")
-class HumanevalCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentCriticism(False, output.content)
-
-
-@output_parser_registry.register("humaneval-solver-autogpt")
-class HumanevalSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- json_dict = re.findall(r"```.*?\n(.+?)```", text, re.DOTALL)[-1]
- try:
- cleaned_output = ast.literal_eval(json_dict)
- except BaseException as e:
- raise OutputParserError(text)
- if "code" not in json_dict:
- raise OutputParserError(text)
- return AgentFinish({"output": cleaned_output["code"]}, text)
-
-
-@output_parser_registry.register("humaneval-solver-autogpt-2")
-class HumanevalSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Text:(.+?)Reasoning:(.+?)Criticism:(.+?)Code:(.+)", text, re.DOTALL
- )[0]
- except BaseException as e:
- raise OutputParserError(text)
- code = parsed_result[-1].strip()
- if code.startswith("```"):
- try:
- code = re.findall(r"```.*?\n(.+?)```", code, re.DOTALL)[0].strip()
- except BaseException as e:
- raise OutputParserError(text)
- return AgentFinish({"output": code}, text)
-
-
-@output_parser_registry.register("humaneval-manager")
-class HumanevalManagerParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
-
-
-# @output_parser_registry.register("humaneval-solver")
-# class HumanevalSolverParser(OutputParser):
-# def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
-# text = output.content
-# end_pos = text.rfind("```")
-# if end_pos == -1:
-# raise OutputParserError(text)
-# text = text[:end_pos]
-# cleaned_output = text.strip().strip("```").strip()
-# if cleaned_output.startswith("python"):
-# cleaned_output = cleaned_output[6:].strip()
-# elif cleaned_output.startswith("python3"):
-# cleaned_output = cleaned_output[7:].strip()
-# return AgentFinish({"output": cleaned_output}, text)
-
-
-@output_parser_registry.register("humaneval-executor-autogpt")
-class HumanevalSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- json_dict = re.findall(r"```.*?\n(.+?)```", text, re.DOTALL)[-1]
- try:
- cleaned_output = ast.literal_eval(json_dict)
- except BaseException as e:
- raise OutputParserError(text)
- if not (
- "code" in json_dict and "file_path" in json_dict and "command" in json_dict
- ):
- raise OutputParserError(text)
- return AgentFinish({"output": cleaned_output}, text)
-
-
-@output_parser_registry.register("humaneval-executor")
-class HumanevalSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Thought:(.+?)Reasoning:(.+?)Criticism:(.+?)File Path:(.+?)Code:(.+?)Command:(.+)",
- text,
- re.DOTALL,
- )[0]
- cleaned_output = {
- "thought": parsed_result[0].strip(),
- "reasoning": parsed_result[1].strip(),
- "criticism": parsed_result[2].strip(),
- "file_path": parsed_result[3].strip().strip("`"),
- "code": parsed_result[4]
- .strip()
- .strip("```")
- .strip("python")
- .strip("python3"),
- "command": parsed_result[5].strip().strip("`"),
- }
- except BaseException as e:
- raise OutputParserError(text)
-
- return AgentFinish({"output": cleaned_output}, text)
-
-
-
-@output_parser_registry.register("humaneval-executor-fc")
-class HumanevalSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
-
- #print("======")
- #print(output)
- #print("======")
-
- try:
- #output_dict = eval(text)
- output_dict = json.loads(text, strict=False)  # strict=False allows control characters (codes 0-31, e.g. '\t', '\n', '\r', '\0') inside strings
- '''
- cleaned_output = {
- "thought": output_dict["thought"].strip(),
- "file_path": output_dict["file_path"].strip().strip("`"),
- "code": output_dict["code"]
- .strip()
- .strip("```")
- .strip("python")
- .strip("python3"),
- "command": output_dict["command"].strip().strip("`"),
- }
- '''
- cleaned_output = output_dict
- except BaseException as e:
- raise OutputParserError(text)
-
- return AgentFinish({"output": cleaned_output}, text)
-
-
-
-@output_parser_registry.register("humaneval-evaluator")
-class HumanevalEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- checks = cleaned_output.split("\n")
-
- patterns = [
- re.compile(r"(?:\d.\s*)?" + dimension + r":\s*(\d)")
- for dimension in self.dimensions
- ]
-
- advice = ""
- for check in reversed(checks):
- advice = check + advice
- if check.startswith("Advice:"):
- break
- checks[-1] = advice
- try:
- # find score and advice
- score = []
- for pattern in patterns:
- for check in checks[:-1]:
- if pattern.findall(check):
- score.append(bool(int(pattern.findall(check)[0])))
- break
- advice = re.findall(r"(?:\d.\s*)?Advice:\s*(.+)", checks[-1])[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score[0], advice
-
-
-@output_parser_registry.register("humaneval-evaluator-2")
-class HumanevalEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- pattern = re.compile(
- r"Response:(.+?)"
- + "".join(
- [
- f"{dimension}:(.+?)"
- if i != len(self.dimensions) - 1
- else f"{dimension}:(.+)"
- for i, dimension in enumerate(self.dimensions)
- ]
- ),
- re.DOTALL,
- )
- try:
- parsed_result = pattern.findall(text)[0]
- score = [bool(int(x.strip())) for x in parsed_result[1:]]
- advice = parsed_result[0].strip()
- except BaseException as e:
- raise OutputParserError(text)
- return score[0], advice
-
-
-@output_parser_registry.register("humaneval-critic")
-class HumanevalyCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- text = re.sub(r"\n+", "\n", text.strip())
- checks = text.split("\n")
- if not (checks[0].startswith("Action:")):
- raise OutputParserError(text)
- if checks[0].strip(". ") == "Action: Agree":
- return AgentCriticism(True, "")
- elif checks[0].strip(". ") == "Action: Disagree":
- pattern = re.compile(r"Action Input: ([\S\n ]+)")
- try:
- criticism = pattern.findall(text)[0].strip()
- except IndexError:
- # logger.error("Bad response from critic!")
- # raise OutputParserError(text)
- criticism = "I think the solution is not correct. Please think carefully and correct it."
- return AgentCriticism(False, criticism)
- else:
- raise OutputParserError(text)
-
-
-@output_parser_registry.register("humaneval-critic-agree")
-class HumanevalyCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- if "[Agree]" in text:
- return AgentCriticism(True, "")
- else:
- return AgentCriticism(False, text)
-
-
-@output_parser_registry.register("humaneval-critic-autogpt")
-class HumanevalCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Text:(.+?)Reasoning:(.+?)Criticism:(.+?)Speak:(.+?)Final Decision:(.+)",
- text,
- re.DOTALL,
- )[0]
- except BaseException as e:
- raise OutputParserError(text)
- decision = parsed_result[-1].strip()
- if "[Agree]" in decision:
- return AgentCriticism(True, "")
- else:
- return AgentCriticism(False, parsed_result[-2].strip())
-
-
-@output_parser_registry.register("humaneval-critic-autogpt-2")
-class HumanevalCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Problem Analysis:(.+?)Solution Analysis:(.+?)Decision:(.+?)Suggestion:(.+)",
- text,
- re.DOTALL,
- )[0]
- except BaseException as e:
- raise OutputParserError(text)
- decision = parsed_result[-2].strip()
- if "[Agree]" in decision:
- return AgentCriticism(True, "")
- else:
- return AgentCriticism(False, parsed_result[-1].strip())
diff --git a/agentverse/tasks/tasksolving/logic_grid/output_parser.py b/agentverse/tasks/tasksolving/logic_grid/output_parser.py
deleted file mode 100644
index 7f8a0d10d..000000000
--- a/agentverse/tasks/tasksolving/logic_grid/output_parser.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-
-logger = get_logger()
diff --git a/agentverse/tasks/tasksolving/mgsm/output_parser.py b/agentverse/tasks/tasksolving/mgsm/output_parser.py
deleted file mode 100644
index f7cd2c0f4..000000000
--- a/agentverse/tasks/tasksolving/mgsm/output_parser.py
+++ /dev/null
@@ -1,166 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-
-logger = get_logger()
-
-
-@output_parser_registry.register("mgsm")
-class MGSMParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
-
-
-@output_parser_registry.register("mgsm-solver-autogpt")
-class MGSMSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Thought:(.+?)Reasoning:(.+?)Criticism:(.+?)Solution:(.+)",
- text,
- re.DOTALL,
- )[0]
- except BaseException as e:
- raise OutputParserError(text)
- return AgentFinish({"output": re.sub(r"\n+", "\n", text.strip())}, text)
-
-
-@output_parser_registry.register("mgsm-evaluator")
-class MGSMEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- # checks = cleaned_output.split("\n")
-
- patterns = [
- re.compile(r"(?:\d.\s*)?" + dimension + r":\s*(\d)")
- for dimension in self.dimensions
- ]
- try:
- # find score and advice
- score_num = [
- int(pattern.findall(cleaned_output)[0])
- for i, pattern in enumerate(patterns)
- ][0]
- if score_num == 0:
- score = False
- elif score_num == 1:
- score = True
- else:
- raise ValueError("Bad score!")
- pat = re.compile(r"(?:\d.\s*)?Response:\s*(.+)", re.DOTALL)
- advice = pat.findall(cleaned_output)[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, advice
-
-
-@output_parser_registry.register("mgsm-evaluator-autogpt")
-class MGSMCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Thought:(.+?)Reasoning:(.+?)Criticism:(.+?)Speak:(.+?)Correctness:(.+)",
- text,
- re.DOTALL,
- )[0]
- score = int(parsed_result[-1])
- advice = parsed_result[-2]
- except BaseException as e:
- raise OutputParserError(text)
- return score, advice
-
-
-@output_parser_registry.register("mgsm-critic")
-class MGSMCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- return AgentCriticism(False, text)
- # text = re.sub(r"\n+", "\n", text.strip())
- # checks = text.split("\n")
- # if not text.startswith("Thought:"):
- # raise OutputParserError(text)
- # if not (checks[0].startswith("Action:")):
- # raise OutputParserError(text)
- # if checks[0].strip(". ") == "Action: Agree":
- # return AgentCriticism(True, "")
- if "[Correct]" in text:
- return AgentCriticism(True, "")
- else:
- # pattern = re.compile(r"Action Input: ([\S\n ]+)")
- # try:
- # criticism = pattern.findall(text)[0].strip()
- # criticism = (
- # re.findall(r"Output:\S?(.+)", text)[0].replace("[Wrong]", "")
- # ).strip()
- criticism = text.replace("[Wrong]", "").strip()
- # except IndexError:
- # logger.error("Bad response from critic!")
- # raise OutputParserError(text)
- # criticism = "I think the solution is not correct. Please think carefully and correct it."
- return AgentCriticism(False, criticism)
- # else:
- # raise OutputParserError(text)
-
-
-@output_parser_registry.register("mgsm-critic-autogpt")
-class MGSMCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Thought:(.+?)Reasoning:(.+?)Criticism:(.+?)Speak:(.+?)Decision:(.+)",
- text,
- re.DOTALL,
- )[0]
- except BaseException as e:
- raise OutputParserError(text)
- if "[Agree]" in parsed_result[-1]:
- return AgentCriticism(True, "")
- else:
- return AgentCriticism(False, parsed_result[-2])
-
-
-@output_parser_registry.register("mgsm-critic-agree")
-class MGSMCriticAgreeParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- text = re.sub(r"\n+", "\n", text.strip())
- # checks = text.split("\n")
- # if not text.startswith("Thought:"):
- # raise OutputParserError(text)
- # if not (checks[0].startswith("Action:")):
- # raise OutputParserError(text)
- # if checks[0].strip(". ") == "Action: Agree":
- # return AgentCriticism(True, "")
- if "[Agree]" in text:
- return AgentCriticism(True, "")
- else:
- # pattern = re.compile(r"Action Input: ([\S\n ]+)")
- # try:
- # criticism = pattern.findall(text)[0].strip()
- # criticism = (
- # re.findall(r"Output:\S?(.+)", text)[0].replace("[Wrong]", "")
- # ).strip()
- criticism = text.replace("[Disagree]", "").strip()
- # except IndexError:
- # logger.error("Bad response from critic!")
- # raise OutputParserError(text)
- # criticism = "I think the solution is not correct. Please think carefully and correct it."
- return AgentCriticism(False, criticism)
- # else:
- # raise OutputParserError(text)
diff --git a/agentverse/tasks/tasksolving/pythoncalculator/output_parser.py b/agentverse/tasks/tasksolving/pythoncalculator/output_parser.py
deleted file mode 100644
index e37a808f6..000000000
--- a/agentverse/tasks/tasksolving/pythoncalculator/output_parser.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-
-logger = get_logger()
-
-
-@output_parser_registry.register("dummy")
-class PipelinePythoncalculatorParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
diff --git a/agentverse/tasks/tasksolving/responsegen/output_parser.py b/agentverse/tasks/tasksolving/responsegen/output_parser.py
deleted file mode 100644
index e24009f78..000000000
--- a/agentverse/tasks/tasksolving/responsegen/output_parser.py
+++ /dev/null
@@ -1,104 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-logger = get_logger()
-
-
-@output_parser_registry.register("responsegen")
-class ResponseGenParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
-
-
-@output_parser_registry.register("responsegen-evaluator")
-class ResponseGenEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- checks = cleaned_output.split("\n")
-
- patterns = [
- re.compile(r"(?:\d.\s*)?" + dimension + r":\s*(\d+)")
- for dimension in self.dimensions
- ]
-
- advice = ""
- for check in reversed(checks):
- advice = check + advice
- if check.startswith("Advice:"):
- break
- checks[-1] = advice
- try:
- # find score and advice
- score = [
- int(pattern.findall(checks[i])[0]) for i, pattern in enumerate(patterns)
- ]
- advice = re.findall(r"(?:\d.\s*)?Advice:\s*(.+)", checks[-1])[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, advice
-
-
-@output_parser_registry.register("responsegen-critic")
-class ResponseGenCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- text = re.sub(r"\n+", "\n", text.strip())
- checks = text.split("\n")
- if not (checks[0].startswith("Action:")):
- raise OutputParserError(text)
- if checks[0].strip(". ") == "Action: Agree":
- return AgentCriticism(True, "")
- elif checks[0].strip(". ") == "Action: Disagree":
- pattern = re.compile(r"Action Input: ([\S\n ]+)")
- try:
- criticism = pattern.findall(text)[0].strip()
- except IndexError:
- criticism = "I think the response can be further improved."
- # raise OutputParserError(text)
- return AgentCriticism(False, criticism)
- else:
- raise OutputParserError(text)
-
-
-@output_parser_registry.register("responsegen-critic-2")
-class ResponseGenCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- # text = re.sub(r"\n+", "\n", text.strip())
- # checks = text.split("\n")
- # if not (checks[0].startswith("Action:")):
- # raise OutputParserError(text)
- # if checks[0].strip(". ") == "Action: Agree":
- # return AgentCriticism(True, "")
- # elif checks[0].strip(". ") == "Action: Disagree":
- # pattern = re.compile(r"Action Input: ([\S\n ]+)")
- # try:
- # criticism = pattern.findall(text)[0].strip()
- # except IndexError:
- # # criticism = "I think the solution is not correct. Please think carefully and correct it."
- # raise OutputParserError(text)
- # return AgentCriticism(False, criticism)
- # else:
- # raise OutputParserError(text)
- result = re.findall(r"Decision:(.+?)Response:(.+)", text, re.DOTALL)
- if len(result) == 0:
- result = ["Disagree", "I think the response can be further improved."]
- else:
- result = result[0]
- if "Agree" in result[0]:
- return AgentCriticism(True, "")
- else:
- return AgentCriticism(False, result[1].strip())
diff --git a/agentverse/tasks/tasksolving/tool_using/24point/config.yaml b/agentverse/tasks/tasksolving/tool_using/24point/config.yaml
index 4e65b3f5b..2ac61f385 100644
--- a/agentverse/tasks/tasksolving/tool_using/24point/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/24point/config.yaml
@@ -153,7 +153,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml b/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml
index 97411d782..d950f8a29 100644
--- a/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/bmi/config.yaml
@@ -153,7 +153,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml b/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml
index abc03fe11..381bb99e2 100644
--- a/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/bookclub/config.yaml
@@ -153,7 +153,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/car/config.yaml b/agentverse/tasks/tasksolving/tool_using/car/config.yaml
index 4344c707e..c2a3ddddf 100644
--- a/agentverse/tasks/tasksolving/tool_using/car/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/car/config.yaml
@@ -153,7 +153,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/date/config.yaml b/agentverse/tasks/tasksolving/tool_using/date/config.yaml
index 6e12f1746..46865cde2 100644
--- a/agentverse/tasks/tasksolving/tool_using/date/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/date/config.yaml
@@ -155,7 +155,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/diy/config.yaml b/agentverse/tasks/tasksolving/tool_using/diy/config.yaml
index 8fa2f173c..ca012db37 100644
--- a/agentverse/tasks/tasksolving/tool_using/diy/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/diy/config.yaml
@@ -155,7 +155,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/output_parser.py b/agentverse/tasks/tasksolving/tool_using/output_parser.py
deleted file mode 100644
index f3b38450d..000000000
--- a/agentverse/tasks/tasksolving/tool_using/output_parser.py
+++ /dev/null
@@ -1,77 +0,0 @@
-from __future__ import annotations
-
-import re
-import ast
-from typing import Union, List, Tuple
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.parser import OutputParserError, output_parser_registry, OutputParser
-from agentverse.llms import LLMResult
-from agentverse.logging import get_logger
-
-logger = get_logger()
-
-
-@output_parser_registry.register("role_description_name_assigner")
-class RoleAssignerParser(OutputParser):
- cnt_critic_agents: int = 0
-
- def parse(self, output: LLMResult) -> List[str]:
- text = output.content
- pattern = re.compile(r"\d+?\.\s*(.+?) - (.+)")
- roles = pattern.findall(text)
- if len(roles) < self.cnt_critic_agents:
- logger.error(
- f"Role assigner failed to assign roles to {self.cnt_critic_agents} critics!"
- )
- raise OutputParserError(text)
- res = []
- for role in roles:
- res.append({"name": role[0], "description": role[1]})
- return res
-
-
-@output_parser_registry.register("tool-using-solver")
-class SolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- pattern = re.compile(r"\d+?\.\s*(.+?) - (.+)")
- tasks = pattern.findall(text)
- if len(tasks) == 0:
- raise OutputParserError(text)
- return AgentFinish({"output": tasks}, text)
-
-
-@output_parser_registry.register("tool-using-executor")
-class ToolUsingSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- if output.function_name != "":
- return AgentAction(
- tool=output.function_name,
- tool_input=output.function_arguments,
- log=output.content,
- )
- else:
- return AgentFinish({"output": output.content}, output.content)
-
-
-@output_parser_registry.register("tool-using-evaluator")
-class HumanevalEvaluatorParser(OutputParser):
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- try:
- result = re.findall(r"Status:(.+?)Speak:(.+)", text, re.DOTALL)[0]
- score = bool(int(result[0]))
- words = result[1].strip()
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, words
-
-
-@output_parser_registry.register("tool-using-critic")
-class ToolUsingCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- return AgentCriticism(False, text)
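The deleted `output_parser.py` above keys both the role assigner and the solver on the regex `\d+?\.\s*(.+?) - (.+)`, which pulls `name`/`description` pairs out of a numbered list. A standalone sketch of that parsing step (the sample text is illustrative, not from the repo):

```python
import re

# Same pattern as the removed RoleAssignerParser: a numbered item,
# then a name and a description separated by " - ".
ROLE_PATTERN = re.compile(r"\d+?\.\s*(.+?) - (.+)")

def parse_roles(text: str) -> list[dict]:
    """Return [{"name": ..., "description": ...}] for each matching line."""
    return [{"name": n, "description": d} for n, d in ROLE_PATTERN.findall(text)]

sample = "1. Alice - a Python expert\n2. Bob - a database administrator"
print(parse_roles(sample))
```

Note that the non-greedy `(.+?)` stops at the first `" - "`, so a description may itself contain hyphens without breaking the split.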
diff --git a/agentverse/tasks/tasksolving/tool_using/party/config.yaml b/agentverse/tasks/tasksolving/tool_using/party/config.yaml
index df7fad0bb..22b4be5e6 100644
--- a/agentverse/tasks/tasksolving/tool_using/party/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/party/config.yaml
@@ -155,7 +155,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml b/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml
index 4d1202028..3fc1a615a 100644
--- a/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/sudoku/config.yaml
@@ -153,7 +153,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/trending/config.yaml b/agentverse/tasks/tasksolving/tool_using/trending/config.yaml
index 101612774..25a778fd3 100644
--- a/agentverse/tasks/tasksolving/tool_using/trending/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/trending/config.yaml
@@ -153,7 +153,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
diff --git a/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml b/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml
index c10dd1ed0..04925ff89 100644
--- a/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml
+++ b/agentverse/tasks/tasksolving/tool_using/vacation/config.yaml
@@ -153,7 +153,7 @@ agents:
temperature: 0
max_tokens: 512
output_parser:
- type: role_description_name_assigner
+ type: role-description-name-assigner
- #solver_agent:
agent_type: solver
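The config renames above switch the YAML key from `role_description_name_assigner` to `role-description-name-assigner`; that only resolves if a parser class is registered under the new hyphenated string. A minimal, hypothetical sketch of the decorator-based registry pattern that `output_parser_registry` appears to follow (simplified; not the repo's actual implementation):

```python
class Registry:
    """Minimal string-keyed registry in the spirit of output_parser_registry."""

    def __init__(self):
        self._entries = {}

    def register(self, key):
        # Used as @registry.register("some-key") above a class definition.
        def decorator(cls):
            self._entries[key] = cls
            return cls
        return decorator

    def build(self, key, **kwargs):
        if key not in self._entries:
            # A stale YAML key (e.g. the old underscored name) fails here.
            raise KeyError(f"Unknown type: {key!r}")
        return self._entries[key](**kwargs)

output_parser_registry = Registry()

@output_parser_registry.register("role-description-name-assigner")
class RoleAssignerParser:
    pass

parser = output_parser_registry.build("role-description-name-assigner")
```

This is why the YAML `type:` strings and the `@output_parser_registry.register(...)` keys must be changed in lockstep, as the patch does.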
From 7739cba39b8f4f4d17b70810a2ed92a79afdc425 Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Thu, 12 Oct 2023 19:43:35 +0800
Subject: [PATCH 024/101] bump version to 0.1.8 #60
---
agentverse/agents/simulation_agent/__init__.py | 0
agentverse/environments/simulation_env/__init__.py | 0
agentverse/environments/tasksolving_env/__init__.py | 0
agentverse/llms/utils/__init__.py | 1 +
setup.py | 2 +-
5 files changed, 2 insertions(+), 1 deletion(-)
create mode 100644 agentverse/agents/simulation_agent/__init__.py
create mode 100644 agentverse/environments/simulation_env/__init__.py
create mode 100644 agentverse/environments/tasksolving_env/__init__.py
create mode 100644 agentverse/llms/utils/__init__.py
diff --git a/agentverse/agents/simulation_agent/__init__.py b/agentverse/agents/simulation_agent/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/agentverse/environments/simulation_env/__init__.py b/agentverse/environments/simulation_env/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/agentverse/environments/tasksolving_env/__init__.py b/agentverse/environments/tasksolving_env/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/agentverse/llms/utils/__init__.py b/agentverse/llms/utils/__init__.py
new file mode 100644
index 000000000..e122a4906
--- /dev/null
+++ b/agentverse/llms/utils/__init__.py
@@ -0,0 +1 @@
+from . jsonrepair import JsonRepair
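The new `__init__.py` re-exports `JsonRepair` so callers can write `from agentverse.llms.utils import JsonRepair`. The repo's implementation isn't shown in this patch; as a purely hypothetical illustration of why such a utility exists, LLM output often fails strict JSON parsing for trivial reasons such as a trailing comma:

```python
import json
import re

def repair_trailing_commas(text: str) -> str:
    # Hypothetical mini-repair (NOT the repo's JsonRepair): drop commas
    # that directly precede a closing brace or bracket.
    return re.sub(r",\s*([}\]])", r"\1", text)

broken = '{"tool": "search", "args": {"query": "weather",},}'
print(json.loads(repair_trailing_commas(broken)))
```

A real repair layer typically handles more cases (unquoted keys, truncated output), but the shape is the same: normalize the text, then hand it to a strict parser.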
diff --git a/setup.py b/setup.py
index bef006aea..1632a1fe5 100644
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
setuptools.setup(
name="agentverse",
- version="0.1.5",
+ version="0.1.8",
author="OpenBMB",
author_email="chenweize1998@gmail.com",
description="A versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs).",
From 7aeed342e9a35cd326e09c10c9dc3fd669e84165 Mon Sep 17 00:00:00 2001
From: yitong <2969413251@qq.com>
Date: Fri, 13 Oct 2023 19:34:10 +0800
Subject: [PATCH 025/101] fix: fix a bug about updating kwargs
---
agentverse/agents/tasksolving_agent/critic.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/agentverse/agents/tasksolving_agent/critic.py b/agentverse/agents/tasksolving_agent/critic.py
index f6aeae3ca..6cbe46148 100644
--- a/agentverse/agents/tasksolving_agent/critic.py
+++ b/agentverse/agents/tasksolving_agent/critic.py
@@ -38,9 +38,9 @@ def __init__(self, *args, **kwargs):
tool_descriptions = "\n".join(
[f"- {t['name']}: " + t["description"] for t in tools]
)
- kwargs.update('tools', tools)
- kwargs.update('tool_names', tool_names)
- kwargs.update('tool_descriptions', tool_descriptions)
+ kwargs.update({"tools": tools})
+ kwargs.update({"tool_names": tool_names})
+ kwargs.update({"tool_descriptions": tool_descriptions})
except Exception as e:
logger.error(e)
logger.warn("Failed to load tool config file.")
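The bug fixed in this patch: `dict.update` accepts a mapping, an iterable of key/value pairs, or keyword arguments, but not two positional arguments, so the old `kwargs.update('tools', tools)` raised `TypeError` at runtime. A quick check of the broken call and the fix:

```python
kwargs = {}
tools = [{"name": "search"}]

try:
    kwargs.update("tools", tools)  # the old, broken call: two positionals
except TypeError:
    pass  # dict.update takes at most one positional argument

kwargs.update({"tools": tools})   # the fix applied in the patch: pass a mapping
# equivalently: kwargs["tools"] = tools
print(kwargs["tools"])
```

Note the patch also works because the exception previously surfaced inside the `try` block, so it was silently downgraded to the "Failed to load tool config file." warning rather than crashing.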
From af343008dc904a32c6a2245df49805b5076f062e Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sat, 14 Oct 2023 19:43:10 +0400
Subject: [PATCH 026/101] update
---
README.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/README.md b/README.md
index 4e448d6cb..564b01009 100644
--- a/README.md
+++ b/README.md
@@ -33,6 +33,10 @@
**AgentVerse** offers a versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our framework empowers researchers to concentrate on their research, rather than being bogged down by implementation details.
+**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications.
+
+
+
⚠️⚠️⚠️ We're refactoring the code, and the goal is to provide a flexibility to construct simulation(without a predefined goal) and task-solving(with a specific goal) environments. Please note that this README is slightly outdated, we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
---
From 0a1a225715dabaa61ec24798890797ce53334e79 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sat, 14 Oct 2023 20:12:38 +0400
Subject: [PATCH 027/101] update
---
README.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 564b01009..ec14a4991 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,9 @@
**AgentVerse** offers a versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our framework empowers researchers to concentrate on their research, rather than being bogged down by implementation details.
-**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications.
+**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: simulation and task-solving.
+- Task-solving: This framework assembles multiple agents as an automatic multi-agent system, such as software development systems, consulting systems, etc., to accomplish the corresponding tasks.
+- Simulation:
From 362dfce5b6977174b6056f672e121af5b147f8a3 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 00:13:09 +0400
Subject: [PATCH 028/101] update
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index ec14a4991..2054d8649 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,7 @@
**AgentVerse** offers a versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our framework empowers researchers to concentrate on their research, rather than being bogged down by implementation details.
-**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: simulation and task-solving.
+**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **simulation** and **task-solving**.
- Task-solving: This framework assembles multiple agents as an automatic multi-agent system, such as software development systems, consulting systems, etc., to accomplish the corresponding tasks.
- Simulation:
From 86f10c35195b3f8a46e04c48ebdfd029e80d10dd Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 00:42:14 +0400
Subject: [PATCH 029/101] update
---
README.md | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 2054d8649..0678602e6 100644
--- a/README.md
+++ b/README.md
@@ -31,12 +31,15 @@
【English | Chinese】
+
+
+**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **task-solving** and **simulation**.
-**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **simulation** and **task-solving**.
-- Task-solving: This framework assembles multiple agents as an automatic multi-agent system, such as software development systems, consulting systems, etc., to accomplish the corresponding tasks.
-- Simulation:
+- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([Multi-agent as system](https://arxiv.org/abs/2309.02427)) to collaboratively accomplish the corresponding tasks. Applications: software development system, consulting system, etc.
+- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents. Applications: game, social behavior research of LLM-based agents, etc.
⚠️⚠️⚠️ We're refactoring the code, and the goal is to provide a flexibility to construct simulation(without a predefined goal) and task-solving(with a specific goal) environments. Please note that this README is slightly outdated, we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
From 1b2a86bcf3f9713e0abd4f1cb5e43f58b8ffc85a Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 00:47:06 +0400
Subject: [PATCH 030/101] update
---
README.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 0678602e6..df27003bb 100644
--- a/README.md
+++ b/README.md
@@ -39,10 +39,12 @@
- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([Multi-agent as system](https://arxiv.org/abs/2309.02427)) to collaboratively accomplish the corresponding tasks. Applications: software development system, consulting system, etc.
-- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents. Applications: game, social behavior research of LLM-based agents, etc.
-
+- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents.(⚠️⚠️⚠️ We're refactoring the code. If you require a stable version that exclusively supports simulation framework, you can use [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
+) Applications: game, social behavior research of LLM-based agents, etc.
+
---
From f1261cf83edae40996915312fa9740fac58d0307 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 00:50:30 +0400
Subject: [PATCH 031/101] update
---
README.md | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/README.md b/README.md
index df27003bb..e1a6a8146 100644
--- a/README.md
+++ b/README.md
@@ -39,9 +39,15 @@
- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([Multi-agent as system](https://arxiv.org/abs/2309.02427)) to collaboratively accomplish the corresponding tasks. Applications: software development system, consulting system, etc.
+
+
+
+
- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents.(⚠️⚠️⚠️ We're refactoring the code. If you require a stable version that exclusively supports simulation framework, you can use [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
) Applications: game, social behavior research of LLM-based agents, etc.
+https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f155e95782e7
+
@@ -60,9 +66,6 @@
- [2023/10/5] 💡 We release the code of our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848), and refactor our codebase to enable the creation of both simulation and task-solving environment! We have placed the code for Minecraft example in the paper at the [`minecraft`](https://github.com/OpenBMB/AgentVerse/tree/minecraft) branch. Our tool-using example will soon be updated to the `main` branch. Stay tuned!
- [2023/8/22] 📝 We're excited to share our work-in-progress paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) related to this repository.
-
-
-
- [2023/6/5] 🎉 We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administrator](#database-administrator-dba), and a simple [H5 Pokemon Game](#pokemon) that enables the interaction with the characters in Pokemon! Try out these demos and have fun!
- [2023/5/1] 🚀 [AgentVerse](https://github.com/OpenBMB/AgentVerse) is officially launched!
@@ -92,12 +95,7 @@ Also, if you're passionate about advancing the frontiers of multi-agent environm
## 👾 Simple Demo Video
We demonstrate the following cases that are expertly crafted by AgentVerse.
-
-
-
#### NLP Classroom
In the NLP class, the professor and students engage in interactive communication. When students have a question, they raise their hands and patiently wait for the professor to call on them. Only after being called on by the professor, can students speak and ask their questions.
From e477d7e4f16b1111d148226c8d700dbf657b1807 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 00:51:15 +0400
Subject: [PATCH 032/101] update
---
README.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index e1a6a8146..cc887baf6 100644
--- a/README.md
+++ b/README.md
@@ -37,14 +37,16 @@
**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **task-solving** and **simulation**.
-- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([Multi-agent as system](https://arxiv.org/abs/2309.02427)) to collaboratively accomplish the corresponding tasks. Applications: software development system, consulting system, etc.
+- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([Multi-agent as system](https://arxiv.org/abs/2309.02427)) to collaboratively accomplish the corresponding tasks.
+Applications: software development system, consulting system, etc.
- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents.(⚠️⚠️⚠️ We're refactoring the code. If you require a stable version that exclusively supports simulation framework, you can use [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
-) Applications: game, social behavior research of LLM-based agents, etc.
+)
+Applications: game, social behavior research of LLM-based agents, etc.
https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f155e95782e7
From 0b1ed5e99324258a7106a4b56440292f0f41bcc4 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 00:52:22 +0400
Subject: [PATCH 033/101] update
---
README.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/README.md b/README.md
index cc887baf6..2d4c89145 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,10 @@
-
-
**AgentVerse** is designed to facilitate the deployment of mutiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **task-solving** and **simulation**.
- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([Multi-agent as system](https://arxiv.org/abs/2309.02427)) to collaboratively accomplish the corresponding tasks.
@@ -52,10 +48,6 @@ Applications: game, social behavior research of LLM-based agents, etc.
https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f155e95782e7
-
-
---
@@ -80,7 +72,7 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
-Also, if you're passionate about advancing the frontiers of multi-agent environments and are eager to dive deeper into research, we invite you to join our team at THUNLP. To explore this exciting opportunity and embark on a collaborative journey with us, please reach out to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com) and express your interest. We're keen to welcome motivated individuals like you to our lab!
+Also, if you're passionate about advancing the frontiers of multi-agent applications and are eager to dive deeper into research, we invite you to join AgentVerse team. Please reach out [AgentVerse Team](agentverse2@gmail.com) and CC to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com). We're keen to welcome motivated individuals like you to our team!
👉Also, check our Discord: https://discord.gg/cnutfCtC.
@@ -423,7 +415,11 @@ If you find this repo helpful, feel free to cite us.
## Contact
-Weize Chen: chenweize1998@gmail.com
+AgentVerse Team: agentverse2@gmail.com
+
+Project leaders:
+
+- Weize Chen: chenweize1998@gmail.com
-[Yusheng Su](https://yushengsu-thu.github.io/): yushengsu.thu@gmail.com
+- [Yusheng Su](https://yushengsu-thu.github.io/): yushengsu.thu@gmail.com
From bc9216e543c32c3505c3340843b0afd0fa2ffbd4 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 01:12:41 +0400
Subject: [PATCH 036/101] update
---
README.md | 41 ++++++++++++++++++++++++++++-------------
1 file changed, 28 insertions(+), 13 deletions(-)
diff --git a/README.md b/README.md
index b1f908857..f29fad997 100644
--- a/README.md
+++ b/README.md
@@ -60,21 +60,8 @@ in detail of AgentVerse.
- [2023/6/5] We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administrator](#database-administrator-dba), and a simple [H5 Pokemon Game](#pokemon) that enables the interaction with the characters in Pokemon! Try out these demos and have fun!
- [2023/5/1] 🚀 [AgentVerse](https://github.com/OpenBMB/AgentVerse) is officially launched!
-## 🌟 Join Us!
-AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.
-
-### How Can You Contribute?
-- **Code Development**: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.
-- **Documentation and Tutorials**: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.
-- **Application Exploration**: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!
-
-- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
-
-Also, if you're passionate about advancing the frontiers of multi-agent applications and are eager to dive deeper into research, we invite you to join AgentVerse team. Please reach out [AgentVerse Team](agentverse2@gmail.com) and CC to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com). We're keen to welcome motivated individuals like you to our team!
-
-👉Also, check our Discord: https://discord.gg/cnutfCtC.
## 🗓 Coming Soon
- [x] Code release of our [paper](https://arxiv.org/abs/2308.10848)
@@ -277,6 +264,7 @@ agentverse-tasksolving --task tasksolving/humaneval/gpt-3.5 --dataset_path data/
You can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
+
+
+
+
+
+
+## 🌟 Join Us!
+AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.
+
+### How Can You Contribute?
+- **Code Development**: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.
+
+- **Documentation and Tutorials**: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.
+
+- **Application Exploration**: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!
+
+- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
+
+Also, if you're passionate about advancing the frontiers of multi-agent applications and are eager to dive deeper into research, we invite you to join AgentVerse team. Please reach out [AgentVerse Team](agentverse2@gmail.com) and CC to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com). We're keen to welcome motivated individuals like you to our team!
+
+
+### Social Media and Community
+
+- Twitter: https://twitter.com/Agentverse71134
+
+- Discord: https://discord.gg/cnutfCtC.
## Star History
From dd49c21c81157fdd454bfd0d6fdc8e68a56b2dbd Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 01:14:46 +0400
Subject: [PATCH 037/101] update
---
README.md | 26 ++++++++++++++++----------
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/README.md b/README.md
index f29fad997..93baee406 100644
--- a/README.md
+++ b/README.md
@@ -70,6 +70,8 @@ in detail of AgentVerse.
- [ ] Add support for local LLM
+
## Contents
@@ -221,13 +224,7 @@ python setup.py develop
```
-
+
### Simulation CLI Example
@@ -264,6 +261,8 @@ agentverse-tasksolving --task tasksolving/humaneval/gpt-3.5 --dataset_path data/
You can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
+
+
-
+
+
+
+## Contents
+- [📰 What's New](#-whats-new)
+- [🗓 Coming Soon](#-coming-soon)
+- [Contents](#contents)
+- [🚀 Getting Started](#-getting-started)
+ - [Installation](#installation)
+ - [Simulation CLI Example](#simulation-cli-example)
+ - [Simulation Local Website Demo](#simulation-local-website-demo)
+ - [Task-Solving CLI Example](#task-solving-cli-example)
+- [🔎 Examples](#-examples)
+- [🌟 Join Us!](#-join-us)
+ - [How Can You Contribute?](#how-can-you-contribute)
+- [Social Media and Community](#social-media-and-community)
+- [Star History](#star-history)
+- [Citation](#citation)
+- [Contact](#contact)
+
+
## 🚀 Getting Started
@@ -224,8 +245,6 @@ python setup.py develop
```
-
-
### Simulation CLI Example
You can create a multi-agent environments provided by us. Using the classroom scenario as an example. In this scenario, there are nine agents, one playing the role of a professor and the other eight as students.
From d13c412a6eae45a59ce1ad824d68e4753ff342b9 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Sun, 15 Oct 2023 02:00:15 +0400
Subject: [PATCH 042/101] update
---
README.md | 8 ++++
README_simulation_cases.md | 75 +++++++++++++++++++++++++++++++++++++
README_tasksolving_cases.md | 0
setup.py | 4 +-
4 files changed, 85 insertions(+), 2 deletions(-)
create mode 100644 README_simulation_cases.md
create mode 100644 README_tasksolving_cases.md
diff --git a/README.md b/README.md
index a72a6db18..a8f876637 100644
--- a/README.md
+++ b/README.md
@@ -280,6 +280,14 @@ agentverse-tasksolving --task tasksolving/humaneval/gpt-3.5 --dataset_path data/
You can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
+## AgentVerse Showcases
+
+### Simulation Showcases
+Refer to
+
+### Task-Solving Showcases
+Refer to
+
-## Contents
+# Contents
- [📰 What's New](#-whats-new)
- [🗓 Coming Soon](#-coming-soon)
- [Contents](#contents)
@@ -207,9 +207,9 @@ https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f1
-## 🚀 Getting Started
+# 🚀 Getting Started
-### Installation
+## Installation
```bash
pip install -U agentverse
@@ -245,9 +245,9 @@ pip install -r requirements.txt
python setup.py develop
```
-### Simulation
+## Simulation
-#### Framework Required Modules
+### Framework Required Modules
```
- agentverse
- agents
@@ -256,7 +256,7 @@ python setup.py develop
- simulation_env
```
-#### CLI Example
+### CLI Example
You can create a multi-agent environments provided by us. Using the classroom scenario as an example. In this scenario, there are nine agents, one playing the role of a professor and the other eight as students.
@@ -266,7 +266,7 @@ python3 agentverse_command/main_simulation_cli.py --task simulation/nlp_classroo
agentverse-simulation --task simulation/nlp_classroom_9players
```
-#### GUI Example (Local)
+### GUI Example (Local)
We also provide a local website demo for this environment. You can launch it with
@@ -278,10 +278,10 @@ agentverse-simulation-gui --task simulation/nlp_classroom_9players
After successfully launching the local server, you can visit [http://127.0.0.1:7860/](http://127.0.0.1:7860/) to view the classroom environment.
-### Task-Solving
+## Task-Solving
-#### Framework Required Modules
+### Framework Required Modules
```
- agentverse
- agents
@@ -290,7 +290,7 @@ After successfully launching the local server, you can visit [http://127.0.0.1:7
- tasksolving_env
```
-#### CLI Example
+### CLI Example
To run the experiments with the task-solving environment proposed in our [paper](https://arxiv.org/abs/2308.10848), you can use the following command:
@@ -304,12 +304,12 @@ agentverse-tasksolving --task tasksolving/humaneval/gpt-3.5 --dataset_path data/
You can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
-## AgentVerse Showcases
+# AgentVerse Showcases
-### Simulation Showcases
+## Simulation Showcases
Refer to [simulation showcases](README_simulation_cases.md)
-### Task-Solving Showcases
+## Task-Solving Showcases
Refer to [tasksolving showcases](README_tasksolving_cases.md)
@@ -447,14 +447,14 @@ Here's a brief overview of each example:
-->
-## 🌟 Join Us!
+# 🌟 Join Us!
AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.
-### Leaders
+## Leaders
-### Contributors
+## Contributors
@@ -469,7 +469,7 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
-### How Can You Contribute?
+## How Can You Contribute?
- **Code Development**: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.
- **Documentation and Tutorials**: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.
@@ -481,14 +481,14 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
Also, if you're passionate about advancing the frontiers of multi-agent applications, become core AgentVerse team members, or are eager to dive deeper into agent research. Please reach out [AgentVerse Team](agentverse2@gmail.com), and CC to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com). We're keen to welcome motivated individuals like you to our team!
-### Social Media and Community
+## Social Media and Community
- Twitter: https://twitter.com/Agentverse71134
- Discord: https://discord.gg/cnutfCtC.
-## Star History
+# Star History
[![Star History Chart](https://api.star-history.com/svg?repos=OpenBMB/AgentVerse&type=Date)](https://star-history.com/#OpenBMB/AgentVerse&Date)
@@ -504,7 +504,7 @@ If you find this repo helpful, feel free to cite us.
}
```
-## Contact
+# Contact
AgentVerse Team: agentverse2@gmail.com
From 8a9f38ed6b74d7b186f2549049b8d4979d37e521 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Mon, 16 Oct 2023 23:12:36 +0400
Subject: [PATCH 055/101] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index fecfbcef5..869d9838f 100644
--- a/README.md
+++ b/README.md
@@ -478,7 +478,7 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
-Also, if you're passionate about advancing the frontiers of multi-agent applications, become core AgentVerse team members, or are eager to dive deeper into agent research. Please reach out [AgentVerse Team](agentverse2@gmail.com), and CC to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com). We're keen to welcome motivated individuals like you to our team!
+Also, if you're passionate about advancing the frontiers of multi-agent applications, become core AgentVerse team members, or are eager to dive deeper into agent research. Please reach out [AgentVerse Team](mailto:agentverse2@gmail.com?subject=[GitHub]%20Source%20Han%20Sans), and CC to [Weize Chen](mailto:chenweize1998@gmail.com?subject=[GitHub]%20Source%20Han%20Sans) and [Yusheng Su](mailto:yushengsu.thu@gmail.com?subject=[GitHub]%20Source%20Han%20Sans). We're keen to welcome motivated individuals like you to our team!
## Social Media and Community
From 31dc92d9a84e27af0ecd8edd3fbb723eaf809943 Mon Sep 17 00:00:00 2001
From: Yusheng Su
Date: Mon, 16 Oct 2023 23:14:21 +0400
Subject: [PATCH 056/101] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 869d9838f..74fe2fe0b 100644
--- a/README.md
+++ b/README.md
@@ -478,7 +478,7 @@ AgentVerse is on a mission to revolutionize the multi-agent environment for larg
- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
-Also, if you're passionate about advancing the frontiers of multi-agent applications, become core AgentVerse team members, or are eager to dive deeper into agent research. Please reach out [AgentVerse Team](mailto:agentverse2@gmail.com?subject=[GitHub]%20Source%20Han%20Sans), and CC to [Weize Chen](mailto:chenweize1998@gmail.com?subject=[GitHub]%20Source%20Han%20Sans) and [Yusheng Su](mailto:yushengsu.thu@gmail.com?subject=[GitHub]%20Source%20Han%20Sans). We're keen to welcome motivated individuals like you to our team!
+Also, if you're passionate about advancing the frontiers of multi-agent applications, become core AgentVerse team members, or are eager to dive deeper into agent research. Please reach out [AgentVerse Team](mailto:agentverse2@gmail.com?subject=[GitHub]%AgentVerse%20Project), and CC to [Weize Chen](mailto:chenweize1998@gmail.com?subject=[GitHub]%AgentVerse%20Project) and [Yusheng Su](mailto:yushengsu.thu@gmail.com?subject=[GitHub]%AgentVerse%20Project). We're keen to welcome motivated individuals like you to our team!
## Social Media and Community
From 649330412b588c50364a1a5776fda538948454c6 Mon Sep 17 00:00:00 2001
From: Ran Li <77495458+ASL-r@users.noreply.github.com>
Date: Tue, 17 Oct 2023 07:38:23 -0700
Subject: [PATCH 057/101] doc: modify README.md (#67) [ci skip]
---
README.md | 6 ++++++
README_zh.md | 5 +++++
2 files changed, 11 insertions(+)
diff --git a/README.md b/README.md
index 74fe2fe0b..b3bb5bb76 100644
--- a/README.md
+++ b/README.md
@@ -22,6 +22,9 @@
+
+
+
@@ -53,6 +56,8 @@ Applications: software development system, consulting system, etc.
# 📰 What's New
+- [2023/10/17] We're super excited to share our open-source AI community on Hugging Face: [`AgentVerse`](https://huggingface.co/spaces/AgentVerse/agentVerse). You can try out the two simulation applications, NLP Classroom and Prisoner's Dilemma, with your OpenAI API key and OpenAI organization code. Have fun!
+
- [2023/10/5] Re-factor our codebase to enable the deployment of both simulation and task-solving framework! We have placed the code for Minecraft example in the paper at the [`minecraft`](https://github.com/OpenBMB/AgentVerse/tree/minecraft) branch. Our tool-using example will soon be updated to the `main` branch. Stay tuned!
- [2023/8/22] We're excited to share our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) that illustrates the task-solving framework.
@@ -487,6 +492,7 @@ Also, if you're passionate about advancing the frontiers of multi-agent applicat
- Discord: https://discord.gg/cnutfCtC.
+- Hugging Face: https://huggingface.co/spaces/AgentVerse/agentVerse.
# Star History
diff --git a/README_zh.md b/README_zh.md
index 988dacaf5..bff9fbc8a 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -10,6 +10,9 @@
+
+
+
@@ -33,6 +36,8 @@
- 🛠 **工具(插件)利用**: AgentVerse支持多智能体环境的工具。目前,AgentVerse支持[BMTools](https://github.com/OpenBMB/BMTools)中提供的工具。
## 📰 最新消息
+- [2023/10/17] We're excited to share our open-source AI community on Hugging Face: [`AgentVerse`](https://huggingface.co/spaces/AgentVerse/agentVerse). After providing your OpenAI API key and OpenAI organization code, you can try out the NLP Classroom and Prisoner's Dilemma simulation applications. Have fun!
+
- [2023/8/22] 📝 We're excited to share our work-in-progress paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) related to this repository.
From b001a327d8346570e892632e8972e332cf0d8c3c Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Wed, 18 Oct 2023 16:31:50 +0800
Subject: [PATCH 058/101] update requirements.txt [ci skip]
---
requirements.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/requirements.txt b/requirements.txt
index dc4985600..b6b24354b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6,7 +6,7 @@ iso-639
openai==0.27.8
opencv-python==4.8.0.76
gradio
-httpx[socks]
+httpx[socks]==0.25.0
astunparse
langchain==0.0.157
scikit-learn
@@ -16,4 +16,4 @@ typing-inspect==0.8.0
colorlog
rapidfuzz
spacy
-colorama==0.4.6
\ No newline at end of file
+colorama==0.4.6
From 53f2ec0e41a748ea05776f6ebd7cecd18f3f3402 Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Wed, 18 Oct 2023 16:58:09 +0800
Subject: [PATCH 059/101] fix: session expired bug in tool calling
---
.../environments/tasksolving_env/rules/executor/tool_using.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/agentverse/environments/tasksolving_env/rules/executor/tool_using.py b/agentverse/environments/tasksolving_env/rules/executor/tool_using.py
index 667de12f1..65bc8ff24 100644
--- a/agentverse/environments/tasksolving_env/rules/executor/tool_using.py
+++ b/agentverse/environments/tasksolving_env/rules/executor/tool_using.py
@@ -16,7 +16,6 @@
url = "http://127.0.0.1:8080"
-# url = "http://8.217.97.110:8080"
SUMMARIZE_PROMPT = """Here is the text gathered from a webpage, and a question you need to answer from the webpage.
-- Webpage --
@@ -255,7 +254,7 @@ async def _summarize_webpage(webpage, question):
for i in range(3):
try:
- async with ClientSession(cookies=cookies) as session:
+ async with ClientSession(cookies=cookies, trust_env=True) as session:
if cookies is None:
async with session.post(
f"{url}/get_cookie", timeout=30
@@ -285,6 +284,7 @@ async def _summarize_webpage(webpage, question):
) as response:
content = await response.text()
if command == "WebEnv_browse_website":
+ openai.aiosession.set(session)
content = await _summarize_webpage(
content, arguments["question"]
)
From 39faec3792214632c92881b51a718f2faf097d13 Mon Sep 17 00:00:00 2001
From: Yu Hsia <75243938+cheesewafer@users.noreply.github.com>
Date: Thu, 19 Oct 2023 10:31:02 +0800
Subject: [PATCH 060/101] feat: support local llms (#68)
* support local LLMs
---------
Co-authored-by: Yu Xia
Co-authored-by: Weize Chen <32613237+chenweize1998@users.noreply.github.com>
Co-authored-by: chenweize1998
---
agentverse/llms/openai.py | 8 +-
.../commongen/llama-2-7b-chat-hf/config.yaml | 197 ++++++++++++++++++
dataloader/commongen.py | 1 +
requirements.txt | 1 +
scripts/run_local_model_server.sh | 11 +
5 files changed, 217 insertions(+), 1 deletion(-)
create mode 100644 agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml
create mode 100644 scripts/run_local_model_server.sh
diff --git a/agentverse/llms/openai.py b/agentverse/llms/openai.py
index 3b7409cf2..08547c300 100644
--- a/agentverse/llms/openai.py
+++ b/agentverse/llms/openai.py
@@ -92,10 +92,12 @@ class OpenAIChatArgs(BaseModelArgs):
# total_tokens=response["usage"]["total_tokens"],
# )
-
+# To support your own local LLMs, register it here and add it into LOCAL_LLMS.
+LOCAL_LLMS = ['llama-2-7b-chat-hf']
@llm_registry.register("gpt-35-turbo")
@llm_registry.register("gpt-3.5-turbo")
@llm_registry.register("gpt-4")
+@llm_registry.register("llama-2-7b-chat-hf")
class OpenAIChat(BaseChatModel):
args: OpenAIChatArgs = Field(default_factory=OpenAIChatArgs)
@@ -109,6 +111,8 @@ def __init__(self, max_retry: int = 3, **kwargs):
args[k] = kwargs.pop(k, v)
if len(kwargs) > 0:
logging.warning(f"Unused arguments: {kwargs}")
+ if args['model'] in LOCAL_LLMS:
+ openai.api_base = "http://localhost:5000/v1"
super().__init__(args=args, max_retry=max_retry)
# def _construct_messages(self, history: List[Message]):
@@ -301,6 +305,7 @@ def get_spend(self) -> int:
"gpt-4": 0.03,
"gpt-4-0613": 0.03,
"gpt-4-32k": 0.06,
+ "llama-2-7b-chat-hf": 0.0,
}
output_cost_map = {
@@ -311,6 +316,7 @@ def get_spend(self) -> int:
"gpt-4": 0.06,
"gpt-4-0613": 0.06,
"gpt-4-32k": 0.12,
+ "llama-2-7b-chat-hf": 0.0,
}
model = self.args.model
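The local-LLM patch above registers `llama-2-7b-chat-hf` through the same decorator-based `llm_registry` used for the GPT models, and reroutes `openai.api_base` to a locally hosted OpenAI-compatible server. The sketch below illustrates that registration pattern with a minimal, hypothetical registry; it is not the actual AgentVerse `Registry` implementation.

```python
# Minimal sketch of the decorator-based registry pattern the patch relies on.
# All names here are illustrative, not the real AgentVerse code.

LOCAL_LLMS = ["llama-2-7b-chat-hf"]  # models served by a local OpenAI-compatible endpoint


class Registry:
    def __init__(self):
        self._entries = {}

    def register(self, name):
        # Returns a decorator, so one class can be registered under several names.
        def decorator(cls):
            self._entries[name] = cls
            return cls
        return decorator

    def build(self, name, **kwargs):
        return self._entries[name](model=name, **kwargs)


llm_registry = Registry()


@llm_registry.register("gpt-3.5-turbo")
@llm_registry.register("llama-2-7b-chat-hf")
class OpenAIChat:
    def __init__(self, model):
        self.model = model
        # Local models are routed to a locally hosted server, mirroring the
        # `openai.api_base` override in the patch.
        self.api_base = (
            "http://localhost:5000/v1" if model in LOCAL_LLMS
            else "https://api.openai.com/v1"
        )
```

Stacking `register` decorators is what lets a single chat-model class serve both the hosted GPT names and a locally served model name.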
diff --git a/agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml b/agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml
new file mode 100644
index 000000000..8514b1004
--- /dev/null
+++ b/agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml
@@ -0,0 +1,197 @@
+cnt_agents: &cnt_agents 2
+max_turn: &max_turn 3
+max_inner_turns: &max_inner_turns 3
+
+prompts:
+ role_assigner_prepend_prompt: &role_assigner_prepend_prompt |-
+
+ role_assigner_append_prompt: &role_assigner_append_prompt |-
+ # Role Description
+ You are the leader of a group of experts, now you need to recruit a small group of experts with diverse identities to generate coherent and grammatically correct sentences containing the following given words:
+ ${task_description}
+
+ You can recruit ${cnt_critic_agents} experts in different fields. What experts will you recruit?
+
+ # Response Format Guidance
+ You should respond with a list of expert descriptions. For example:
+ 1. an electrical engineer specialized in the field of xxx.
+ 2. an economist who is good at xxx.
+ 3. a lawyer with a good knowledge of xxx.
+ ...
+
+ Only respond with the description of each role. Do not include your reason.
+
+ solver_prepend_prompt: &solver_prepend_prompt |-
+ You are ${role_description}. Generate a coherent and grammatically correct paragraph containing the following given words (or their variations):
+ WORDS:
+ ${task_description}
+
+ solver_append_prompt: &solver_append_prompt |-
+
+ critic_prepend_prompt: &critic_prepend_prompt |-
+ You are in a discussion group, aiming to generate coherent and grammatically correct sentences containing the following given words (or their variations):
+ WORDS:
+ ${task_description}
+
+ Below is the chat history in your group.
+
+ critic_append_prompt: &critic_append_prompt |-
+ You are ${role_description}. Based on your knowledge, can you check whether the latest provided paragraph contains all the given words or their variations? When responding, you should follow the following rules:
+ 1. If the above latest provided solution has covered all the given words or their variations, end your response with a special token "[Agree]".
+ 2. If not, double-check the above solutions, give your critique, and generate a better solution.
+
+ manager_prompt: &manager_prompt |-
+
+ executor_prepend_prompt: &executor_prepend_prompt |-
+
+ executor_append_prompt: &executor_append_prompt |-
+
+ evaluator_prepend_prompt: &evaluator_prepend_prompt |-
+
+ evaluator_append_prompt: &evaluator_append_prompt |-
+ You are a reviewer who checks whether a paragraph contains all the given words (including their variations). When some words are missing, you should patiently point out, and output a score of 0. When the paragraph contains all the words, you should output a score of 1.
+
+ WORDS:
+ ${task_description}
+
+ SOLUTION:
+ ```
+ ${solution}
+ ```
+
+ TEST RESULT:
+ ${result}
+
+ RESPONSE FORMAT:
+ You must respond in the following format:
+ Score: (0 or 1. 0 if there are some missing words, 1 if there are no missing words)
+ Advice: (point out all the missing words)
+
+
+name: pipeline
+
+
+environment:
+ env_type: task-basic
+ max_turn: *max_turn
+ rule:
+ role_assigner:
+ type: role_description
+ cnt_agents: *cnt_agents
+ decision_maker:
+ type: vertical-solver-first
+ max_inner_turns: *max_inner_turns
+ executor:
+ type: coverage-test
+ evaluator:
+ type: basic
+
+agents:
+ - #role_assigner_agent:
+ agent_type: role_assigner
+ name: role assigner
+ max_retry: 1000
+ prepend_prompt_template: *role_assigner_prepend_prompt
+ append_prompt_template: *role_assigner_append_prompt
+ memory:
+ memory_type: chat_history
+ llm:
+ llm_type: llama-2-7b-chat-hf
+ model: "llama-2-7b-chat-hf"
+ temperature: 0
+ max_tokens: 512
+ output_parser:
+ type: role_assigner
+
+ - #solver_agent:
+ agent_type: solver
+ name: Planner
+ max_retry: 1000
+ max_history: 4
+ prepend_prompt_template: *solver_prepend_prompt
+ append_prompt_template: *solver_append_prompt
+ memory:
+ memory_type: chat_history
+ llm:
+ llm_type: llama-2-7b-chat-hf
+ model: "llama-2-7b-chat-hf"
+ temperature: 0
+ max_tokens: 1024
+ output_parser:
+ type: commongen
+ # max_tokens: 1024
+ # stop:
+ # - "\ndef "
+ # - "\nclass "
+ # - "\nif "
+ # - "\n\n#"
+
+ - #critic_agents:
+ agent_type: critic
+ name: Critic 1
+ max_retry: 1000
+ max_history: 4
+ role_description: |-
+ Waiting to be assigned.
+ prepend_prompt_template: *critic_prepend_prompt
+ append_prompt_template: *critic_append_prompt
+ memory:
+ memory_type: chat_history
+ llm:
+ llm_type: llama-2-7b-chat-hf
+ model: "llama-2-7b-chat-hf"
+ temperature: 0
+ max_tokens: 1024
+ output_parser:
+ type: mgsm-critic-agree
+
+ - #executor_agent:
+ agent_type: executor
+ name: Executor
+ max_retry: 1000
+ prepend_prompt_template: *executor_prepend_prompt
+ append_prompt_template: *executor_append_prompt
+ memory:
+ memory_type: chat_history
+ llm:
+ llm_type: llama-2-7b-chat-hf
+ model: llama-2-7b-chat-hf
+ temperature: 0
+ max_tokens: 1024
+ output_parser:
+ type: commongen
+
+ - #evaluator_agent:
+ agent_type: evaluator
+ name: Evaluator
+ max_retry: 1000
+ role_description: |-
+ Evaluator
+ prepend_prompt_template: *evaluator_prepend_prompt
+ append_prompt_template: *evaluator_append_prompt
+ memory:
+ memory_type: chat_history
+ llm:
+ llm_type: llama-2-7b-chat-hf
+ model: llama-2-7b-chat-hf
+ temperature: 0.3
+ max_tokens: 1024
+ output_parser:
+ type: humaneval-evaluator
+ dimensions:
+ - Score
+
+ - #manager_agent:
+ agent_type: manager
+ name: Manager
+ max_retry: 1000
+ prompt_template: *manager_prompt
+ memory:
+ memory_type: chat_history
+ llm:
+ llm_type: llama-2-7b-chat-hf
+ model: "llama-2-7b-chat-hf"
+ temperature: 0
+ max_tokens: 1024
+ output_parser:
+ type: humaneval-manager
\ No newline at end of file
diff --git a/dataloader/commongen.py b/dataloader/commongen.py
index e7a5e75f9..6cb41385e 100644
--- a/dataloader/commongen.py
+++ b/dataloader/commongen.py
@@ -5,6 +5,7 @@
@dataloader_registry.register("tasksolving/commongen/gpt-4")
@dataloader_registry.register("tasksolving/commongen/gpt-3.5")
+@dataloader_registry.register("tasksolving/commongen/llama-2-7b-chat-hf")
class CommongenLoader(DataLoader):
def __init__(self, path: str):
super().__init__(path)
diff --git a/requirements.txt b/requirements.txt
index b6b24354b..e93df8d9f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17,3 +17,4 @@ colorlog
rapidfuzz
spacy
colorama==0.4.6
+fschat[model_worker,webui]
diff --git a/scripts/run_local_model_server.sh b/scripts/run_local_model_server.sh
new file mode 100644
index 000000000..0d16fb901
--- /dev/null
+++ b/scripts/run_local_model_server.sh
@@ -0,0 +1,11 @@
+:<
Date: Fri, 20 Oct 2023 10:30:55 +0800
Subject: [PATCH 061/101] Update README.md [ci skip]
---
README.md | 40 +++++++++++++++++++++-------------------
1 file changed, 21 insertions(+), 19 deletions(-)
diff --git a/README.md b/README.md
index b3bb5bb76..a3f013a99 100644
--- a/README.md
+++ b/README.md
@@ -18,13 +18,17 @@
-
+
+
+
+
+
**AgentVerse** is designed to facilitate the deployment of multiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **task-solving** and **simulation**.
- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([AgentVerse-Tasksolving](https://arxiv.org/pdf/2308.10848.pdf), [Multi-agent as system](https://arxiv.org/abs/2309.02427)) to collaboratively accomplish the corresponding tasks.
-Applications: software development system, consulting system, etc.
+Applications: software development system, consulting system, etc.
+ 🤖 AgentVerse 🪐
+ Think of it as a universe for LLM agents.
+ It is designed to facilitate the deployment of multiple LLM-based agents in various applications, which
+ primarily provides two frameworks: task-solving and simulation
+ Introduce how to run an example successfully in CLI mode and in GUI mode
+
+
+ The codebase facilitates the creation and simulation of multi-agent systems, potentially incorporating language models for agent interactions.
+
+
Requirements
+
+
This documentation assumes you have the following:
+
+ This section of the README details how to get up and running with this project. The steps below were
+ used to install it manually on Windows 10:
+
+ Make sure you have Python >= 3.9
+
Optional: If you want to use AgentVerse with local models such as LLaMA, you need to additionally install some other dependencies:
+ pip install -r requirements_local.txt
+
+
+ Alternatively, you can install AgentVerse from PyPI:
+ pip install -U agentverse
+
+
+
This guide shows how to export environment variables. On Windows, the screenshot below gives an example of how it can be done.
+
+
Simulations: Graphical User Interface
+
+
+ Simulations can have either a Graphical User Interface (GUI) or a Command Line Interface (CLI), and some
+ simulations have both. Here are some showcases: examples of CLI simulations can be found here, and GUI simulations can be found here.
+
+ The simulation agent, located in the agents folder inside the agentverse folder, defines a
+ BaseAgent, a Python class that serves as the foundation for other agent classes in the AgentVerse
+ framework.
+ The class includes attributes such as name, llm, output_parser, and methods like step, reset, etc.
+ The agent can interact with an environment and has methods for performing steps, asynchronous steps, and
+ memory management.
+ It provides functionality for managing prompts, receivers, spending, and memory-related operations.
+
+ As part of the agent registry module in the AgentVerse framework there is an initialization file that imports
+ classes from various modules, including simulation agents (ConversationAgent, ToolAgent, etc.) and
+ task-solving agents (RoleAssignerAgent, CriticAgent, etc.).
+ The Registry class from agentverse.registry is used to manage and register agents. The file initializes an
+ instance of Registry named AgentRegistry for agent registration.
+ These files shown below collectively contribute to the structure and functionality of the AgentVerse
+ framework, providing a foundation for creating and managing different types of agents in both simulation and
+ task-solving scenarios.
+
+
1. Prisoner's dilemma
+
+ This class defines a PrisonerDilemaAgent within the AgentVerse framework.
+ This agent class is designed for simulating interactions in a Prisoner's Dilemma scenario.
+ Additionally, subclasses PoliceAgent and PrisonerAgent inherit from PrisonerDilemaAgent, each representing a
+ specific role with additional role-specific information.
+
+
+
+ Imports and Type Checking
+ : The from __future__ import annotations statement is used for postponed evaluation of type annotations.
+ The class also uses type hints, and there's a conditional import based on type checking to avoid circular
+ dependencies.
+ Class Structure: The class PrisonerDilemaAgent inherits from BaseAgent and serves as a base for
+ agents involved in a Prisoner's Dilemma scenario.
+ Methods:
+ step: Represents a step in the agent's decision-making process within the environment.
+ astep: An asynchronous version of step.
+ _fill_prompt_template: A helper method to fill placeholders in the prompt template, including agent
+ name, environment description, role description, and chat history.
+ add_message_to_memory: Adds messages to the agent's memory.
+ reset: Resets the agent.
+
+ Role-specific Subclasses: The PoliceAgent and PrisonerAgent subclasses introduce role-specific behavior
+ and template filling.
+
+
Task solving agent - Command Line Interface
+
+
+ The base.py file, located inside the task-solving agent folder, defines an abstract base class named BaseAgent within the AgentVerse framework.
+ This class represents the foundation for creating agents in a multi-agent system. Agents are entities capable of interacting with an environment, processing messages, and maintaining memory.
+
Attributes:
+
name (str): A unique identifier for the agent.
+
llm (BaseLLM): An instance of a language model that the agent uses for generating responses.
+
output_parser (OutputParser): An output parser that interprets the responses generated by the agent.
+
prepend_prompt_template (str): Template for the prompt to be prepended before generating a response.
+
append_prompt_template (str): Template for the prompt to be appended before generating a response.
+
prompt_template (str): Combined template for the entire prompt used in generating a response.
+
role_description (str): A description of the role of the agent.
+
memory (BaseMemory): Memory for storing chat history and other relevant information.
+
memory_manipulator (BaseMemoryManipulator): Manipulator for interacting with the agent's memory.
+
max_retry (int): The maximum number of retry attempts if an error occurs during response generation.
+
receiver (Set[str]): A set of identifiers representing the entities that can receive messages from the agent.
+
async_mode (bool): A flag indicating whether the agent operates in asynchronous mode.
+
+
+
+ Abstract Methods:
+
step(self, env_description: str = "") -> Message: Abstract method to get one-step response.
+
astep(self, env_description: str = "") -> Message: Abstract asynchronous version of step.
+
reset(self) -> None: Abstract method to reset the agent.
+
add_message_to_memory(self, messages: List[Message]) -> None: Abstract method to add a message to the agent's memory.
+
+ Methods:
+get_spend(self) -> float: Gets the spending of the agent (associated with language model usage).
+get_spend_formatted(self) -> str: Gets the formatted spending of the agent.
+get_all_prompts(self, **kwargs): Gets both prepend and append prompts along with the total number of tokens.
+get_receiver(self) -> Set[str]: Gets the set of entities that can receive messages from the agent.
+set_receiver(self, receiver: Union[Set[str], str]) -> None: Sets the receiver entities.
+add_receiver(self, receiver: Union[Set[str], str]) -> None: Adds receiver entities.
+remove_receiver(self, receiver: Union[Set[str], str]) -> None: Removes receiver entities.
+
+
This base class provides a blueprint for creating agents in a multi-agent system. It encapsulates common attributes and methods that agents might use to interact with the environment, generate responses, and manage their memory. The abstract methods enforce the implementation of core functionalities in derived agent classes. The class is designed to be extended to create specialized agents tailored for specific tasks or environments within the AgentVerse framework.
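The BaseAgent blueprint described above can be sketched as a small abstract class. The attribute and method names follow the README's description (step, reset, add_message_to_memory, receiver management), but the class body is an illustrative sketch, not the real AgentVerse code.

```python
# Illustrative sketch of the BaseAgent blueprint; hypothetical, simplified code.
from abc import ABC, abstractmethod
from typing import List, Set, Union


class Message:
    def __init__(self, sender: str, content: str):
        self.sender = sender
        self.content = content


class BaseAgent(ABC):
    def __init__(self, name: str, receiver: Union[Set[str], str] = "all"):
        self.name = name
        self.memory: List[Message] = []
        self.receiver: Set[str] = {receiver} if isinstance(receiver, str) else set(receiver)

    @abstractmethod
    def step(self, env_description: str = "") -> Message:
        """Generate a one-step response to the environment."""

    def add_message_to_memory(self, messages: List[Message]) -> None:
        self.memory.extend(messages)

    def reset(self) -> None:
        self.memory.clear()

    # Receiver management mirrors the get/set/add/remove_receiver methods above.
    def add_receiver(self, receiver: Union[Set[str], str]) -> None:
        self.receiver |= {receiver} if isinstance(receiver, str) else set(receiver)

    def remove_receiver(self, receiver: Union[Set[str], str]) -> None:
        self.receiver -= {receiver} if isinstance(receiver, str) else set(receiver)


class EchoAgent(BaseAgent):
    # A trivial concrete agent: the abstract methods force subclasses to
    # implement the core step logic.
    def step(self, env_description: str = "") -> Message:
        return Message(self.name, f"{self.name} sees: {env_description}")
```

The abstract methods enforce that every derived agent implements the core interaction loop, while memory and receiver handling are shared.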
+
+
+
+
+
+
+
+
+
+
Simulation environment
+
+ The simulation environments are defined in a class named BasicEnvironment within the AgentVerse framework. This
+ environment, labeled as "sim-basic" in the EnvironmentRegistry, serves as a foundational simulation environment
+ implementing the logic of conversation. It is designed to facilitate interactions between multiple agents based
+ on a set of rules.
+
Purpose:
+
+ This class represents a basic simulation environment within the AgentVerse framework, implementing the logic of
+ conversation between agents.
+ Inheritance:
+
+ Inherits from BaseEnvironment, providing a base class for custom environment implementations.
+ Registration:
+
+ Registered in the EnvironmentRegistry under the name "sim-basic."
+ Attributes:
+
agents: List of agents participating in the environment.
+
rule: Rule defining the logic of agent interactions.
+
max_turns: Maximum number of turns in the environment.
+
cnt_turn: Current turn number in the environment.
+
last_messages: Messages from the last turn.
+
rule_params: Variables set by the rule.
+ Initialization:
+
+ The class initializes with a rule, and additional configurations for order, visibility, selector, updater, and
+ describer aspects of the rule are extracted from the rule configuration.
+ Methods:
+ step: Asynchronously runs one step of the environment, involving obtaining the next agent index,
+ generating the current environment description, generating the next message from each agent, selecting certain
+ messages based on rules, updating agent memory, updating the set of visible agents, and incrementing the turn
+ count.
+ print_messages: Logs the content of selected messages.
+ reset: Resets the environment, setting the turn count to 0, resetting the rule, and calling the
+ reset method for each agent.
+ is_done: Checks if the environment is done based on the current turn count.
+ This class provides a flexible and extensible foundation for creating various simulation environments within the
+ AgentVerse framework. It encapsulates the common logic for agent interactions, allowing for easy customization
+ and extension based on specific simulation requirements.
+
+ The code snippet below shows an example of its usage:
+
+     # Create an instance of BasicEnvironment
+     basic_env = BasicEnvironment(rule=custom_rule, agents=[agent1, agent2])
+
+     # Run one step of the environment
+     resulting_messages = await basic_env.step()
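A runnable, stripped-down sketch of the conversation loop BasicEnvironment implements (each turn, every agent produces a message until max_turns is reached). The rule, visibility, and memory machinery is omitted, and all names here are illustrative rather than the actual implementation.

```python
# Simplified, hypothetical sketch of the BasicEnvironment conversation loop.
import asyncio


class BasicEnvironment:
    def __init__(self, agents, max_turns: int = 2):
        self.agents = agents
        self.max_turns = max_turns
        self.cnt_turn = 0
        self.last_messages = []

    async def step(self):
        # One turn: gather a message from every agent concurrently.
        self.last_messages = await asyncio.gather(
            *(agent.astep(f"turn {self.cnt_turn}") for agent in self.agents)
        )
        self.cnt_turn += 1
        return self.last_messages

    def is_done(self) -> bool:
        return self.cnt_turn >= self.max_turns


class ChattyAgent:
    def __init__(self, name):
        self.name = name

    async def astep(self, env_description: str) -> str:
        return f"{self.name} speaking on {env_description}"


async def run():
    env = BasicEnvironment([ChattyAgent("a"), ChattyAgent("b")])
    while not env.is_done():
        await env.step()
    return env.cnt_turn, env.last_messages
```

Using `asyncio.gather` matches the description of agents taking asynchronous steps within a single environment turn.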
+
+
+
+
+
+
+
Pokemon environment
+
+ The Pokemon environment defines a class named PokemonEnvironment within the AgentVerse framework. This
+ environment, labeled as "pokemon" in the EnvironmentRegistry, is designed for simulating a Pokémon-themed
+ scenario. It involves multiple agents located in different places, and the environment allows for interactions
+ and responses to a player's messages.
+
+ Initialization: The class initializes with information about agents, their initial locations, a rule
+ governing the environment, and additional parameters.
+ Environment Steps: The step method simulates one step of the environment, allowing either routine
+ agent actions or responses to a player's input.
+ Internal Methods:
+ _routine_step: Simulates routine steps for non-player agents.
+ _respond_to_player: Processes the player's input and triggers agent responses.
+ update_state: Updates the state of agents based on their locations.
+ print_messages: Logs messages for debugging or information.
+ reset: Resets the environment to its initial state.
+ is_done: Checks if the environment simulation is completed.
+ get_test_messages: Returns a set of test messages for demonstration.
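The two-mode step dispatch described for PokemonEnvironment (routine NPC behavior versus responding to a player's input) can be sketched as follows. The class body is hypothetical and omits locations, rules, and real agents.

```python
# Hypothetical sketch of the routine-vs-player step dispatch; not the real class.
from typing import Optional


class PokemonEnvironment:
    def __init__(self):
        self.log = []

    def _routine_step(self):
        # Routine behavior for non-player agents when no player input arrives.
        self.log.append("npc: routine action")

    def _respond_to_player(self, player_input: str):
        # Player input triggers agent responses instead of routine behavior.
        self.log.append(f"npc: responding to '{player_input}'")

    def step(self, player_input: Optional[str] = None):
        # Dispatch: player input takes priority over routine behavior.
        if player_input is None:
            self._routine_step()
        else:
            self._respond_to_player(player_input)
        return self.log[-1]
```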
+
+
+
+
+
+
+
+
Prisoner's dilemma
+
+ PrisonerDilemmaEnvironment is a specific environment implementation within the AgentVerse framework. This
+ environment is designed for simulating a scenario related to the prisoner's dilemma. The class inherits from
+ BasicEnvironment and is registered in the EnvironmentRegistry with the label "prisoner_dilemma."
+ The purpose of this class is to represent an environment for simulating the prisoner's dilemma, a classic
+ problem in game theory.
+ - It inherits from BasicEnvironment, providing a foundation for custom environment implementations.
+ - It is registered in the EnvironmentRegistry under the label "prisoner_dilemma."
+ - It asynchronously runs one step of the environment simulation. This involves obtaining the next agent index,
+ generating the current environment description, generating the next message from each agent, selecting certain
+ messages based on rules, updating agent memory, updating the set of visible agents, and incrementing the turn
+ count.
+ - It inherits attributes and methods from BasicEnvironment, and it utilizes a specific rule (SimulationRule)
+ for governing agent interactions in the environment.
+ - For asynchronous execution it utilizes asynchronous programming with asyncio.gather to concurrently
+ execute agent steps.
+ This class is registered in the EnvironmentRegistry under the name "prisoner_dilemma," facilitating easy
+ retrieval and instantiation.
+
+
+ The screenshot below shows the location of the Prisoner's dilemma class
+
+
Software Development Environment Team
+
+ This file defines a class named SdeTeamEnvironment within the AgentVerse framework. This environment class is
+ designed for simulating interactions in a software development environment where a team collaborates to craft
+ code. It extends the BaseEnvironment class and includes specific rules and behaviors for the given context.
+
+ Imports: The class includes necessary imports for asynchronous operations, logging, typing, and JSON
+ handling.
+ Class Structure: The class SdeTeamEnvironment is registered in the EnvironmentRegistry as the
+ environment for simulating software development teams.
+ Attributes:
+ agents: List of BaseAgent instances representing team members.
+
rule
- Rule for the environment.
+
max_turns
- Maximum number of turns for the simulation.
+
cnt_turn
- Current turn number.
+
last_messages
- Messages from the last turn.
+
rule_params
- Variables set by the rule.
+
task_name
- A string representing the name of the task or project (default is "test").
+
+ Methods
+ Constructor (__init__)
+ Initializes the environment with a specified rule and other optional parameters. It configures the rule based on
+ provided configurations for order, visibility, selector, updater, and describer.
+
+ step Method
+
+ Purpose: Runs one step of the environment simulation.
+ Steps
+
Gets the next agent index based on the order rule.
+
Generates the next message asynchronously for each agent.
+
Selects certain messages based on the selector rule.
+
Updates memory of the agents using the updater rule.
+
Updates the set of visible agents for each agent.
+
Increments the turn count.
+
Returns the selected messages.
+
+ print_messages Method
+
+ Purpose: Prints the sender and content of messages to the logging system.
+
+ Parameters:
+
messages: List of Message instances.
+
+ Output: Logs the sender and content of each message.
+
+
reset Method: Resets the environment, including the turn count, rule, and agent states.
+
+
Output: Resets the environment state.
+
is_done Method: Checks if the environment is done, either reaching the maximum turns or an end flag.
+
Output: Returns True if the environment is done, False otherwise.
+
Rule Initialization and Configuration:
+ Initializes and configures the rule for order, visibility, selector, updater, and describer based on provided
+ configurations.
+
+ The SdeTeamEnvironment class is specifically designed to model interactions within a software development
+ environment where a team collaborates to craft code. The environment includes rules for order, visibility,
+ selector, updater, and describer, which are tailored to simulate the dynamics of a software development team.
+ The asynchronous step method and other functionalities allow for the simulation of turns, messages, and
+ interactions among team members.
+
+ Tool Loading
+ The environment loads a tool named "code_interpreter" as part of the initialization process. This tool, along
+ with others, is loaded using the load_tools function from agentverse.initialization. The tools are used to
+ interact with specific functionalities related to the software development context.
+
+
+
+
+
+
+
+
+
+
Task solving Environment
+
+ The TasksolvingRule class, defined within the AgentVerse framework, represents a rule set for a task-solving environment where agents collaborate to perform tasks.
+ The rule set includes components for role assignment, decision-making, execution, and evaluation. It extends the BaseRule class and is designed to be used in the context of multi-agent systems where agents need to work together to solve complex tasks.
+
Components:

- Role Assigner (BaseRoleAssigner): Assigns roles to agents based on the task description and advice.
- Decision Maker (BaseDecisionMaker): Determines the plan or decision for solving the task, taking into account the task description, previous plans, and advice.
- Executor (BaseExecutor): Executes the task using the final solution produced by the decision-making stage.
- Evaluator (BaseEvaluator): Evaluates the solution and execution result to produce a score and advice.

Attributes:

- role_assigner: Instance of the role assigner component.
- decision_maker: Instance of the decision maker component.
- executor: Instance of the executor component.
- evaluator: Instance of the evaluator component.
- role_assign_only_once: Boolean indicating whether role assignment should occur only once.
- add_execution_result_to_critic: Boolean indicating whether to add execution results to critic agents.
- add_execution_result_to_solver: Boolean indicating whether to add execution results to the solver agent.
Methods:

- __init__: Initializes the rule set, configuring the role assigner, decision maker, executor, and evaluator from the provided configurations.
- role_assign: Assigns roles to agents based on the task description, advice, and turn count; handles the case where role assignment occurs only once.
- decision_making: Determines the plan or decision for solving the task, handling dynamic decision-making scenarios and taking into account the task description, previous plans, and advice.
- execute: Executes the task using the executor component and, depending on configuration, adds the execution results to the critic and solver agents.
- evaluate: Evaluates the solution and execution result using the evaluator component; handles human evaluation scenarios by collecting scores and advice.
- reset: Resets the state of the rule set by resetting each component.
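The role-assignment, decision-making, execution, and evaluation stages described above might be wired together roughly as follows. All class, parameter, and method names in this sketch are illustrative, and the loop body is a simplification of the real implementation:

```python
class ToyTasksolvingRule:
    """Illustrative wiring of the four components; not the real AgentVerse API."""
    def __init__(self, role_assigner, decision_maker, executor, evaluator,
                 role_assign_only_once=True):
        self.role_assigner = role_assigner
        self.decision_maker = decision_maker
        self.executor = executor
        self.evaluator = evaluator
        self.role_assign_only_once = role_assign_only_once

    def solve(self, task: str, max_turns: int = 3):
        advice, roles, plan, result, score = "", None, None, None, 0.0
        for _ in range(max_turns):
            # Re-assign roles each turn unless configured to assign only once.
            if roles is None or not self.role_assign_only_once:
                roles = self.role_assigner(task, advice)
            # Decide on a plan, execute it, then score it and collect advice.
            plan = self.decision_maker(task, roles, advice)
            result = self.executor(task, plan)
            score, advice = self.evaluator(plan, result)
            if score >= 1.0:  # solution judged good enough: stop iterating
                break
        return plan, result, score

# Toy components: plain callables standing in for agent-backed objects.
rule = ToyTasksolvingRule(
    role_assigner=lambda task, advice: ["engineer", "tester"],
    decision_maker=lambda task, roles, advice: f"plan for {task}",
    executor=lambda task, plan: f"result of {plan}",
    evaluator=lambda plan, result: (1.0, "looks good"),
)
plan, result, score = rule.solve("add two numbers")
```

The loop makes the feedback path explicit: the evaluator's advice feeds back into the next round of role assignment and decision-making until the score clears a threshold or the turn budget runs out.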
The TasksolvingRule class is designed for a task-solving environment in which multiple agents collaborate to perform tasks. It integrates components for role assignment, decision-making, execution, and evaluation, and it supports dynamic decision-making scenarios, human evaluation, and configuration options that control the rule set's behavior. Below is a screenshot showing the codebase for this class.
Contributing

To contribute, or if you are interested in joining AgentVerse and becoming a core AgentVerse team member, don't hesitate to reach out to the project leaders; their details are in the repo's README file.