Autogenstudio - Add GroupChat Support to UI (microsoft#1352)
* support groupchat, other QOL fixes

* remove gallery success toast

* Update website/blog/2023-12-01-AutoGenStudio/index.mdx

Co-authored-by: Chi Wang <[email protected]>

---------

Co-authored-by: Chi Wang <[email protected]>
2 people authored and mtwalther committed Jan 26, 2024
1 parent 9b8a331 commit c37026a
Showing 35 changed files with 2,147 additions and 298 deletions.
2 changes: 2 additions & 0 deletions samples/apps/autogen-studio/MANIFEST.in
@@ -1,5 +1,7 @@
recursive-include autogenstudio/web/ui *
recursive-include autogenstudio/web/database.sqlite
recursive-exclude notebooks *

recursive-exclude frontend *
recursive-exclude docs *
recursive-exclude tests *
18 changes: 18 additions & 0 deletions samples/apps/autogen-studio/README.md
@@ -28,6 +28,8 @@ Project Structure:

### Installation

There are two ways to install AutoGen Studio: from PyPI or from source. We **recommend installing from PyPI** unless you plan to modify the source code.

1. **Install from PyPI**

We recommend using a virtual environment (e.g., conda) to avoid conflicts with existing Python packages. With Python 3.10 or newer active in your virtual environment, use pip to install AutoGen Studio:
@@ -108,6 +110,18 @@ The agent workflow responds by _writing and executing code_ to create a python p
<!-- ![ARA](./docs/ara_console.png) -->
## Contribution Guide
We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project:
- Review the overall AutoGen project [contribution guide](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing)
- Please review the AutoGen Studio [roadmap](https://github.com/microsoft/autogen/issues/737) to get a sense of the project's current priorities. Help is appreciated, especially with Studio issues tagged `help-wanted`.
- Please open a discussion on the roadmap issue or a new issue for your proposed contribution.
- Please base your contribution on the autogenstudio [dev branch](https://github.com/microsoft/autogen/tree/autogenstudio) so that it stays aligned with the latest changes in the AutoGen Studio project.
- Submit a pull request with your contribution!
- If you are modifying AutoGen Studio, note that it has its own devcontainer; see the instructions in `.devcontainer/README.md` to use it.
- Please use the tag `studio` for any issues, questions, and PRs related to Studio.
## FAQ
**Q: Where can I adjust the default skills, agent and workflow configurations?**
@@ -119,6 +133,10 @@ A: To reset your conversation history, you can delete the `database.sqlite` file
**Q: Is it possible to view the output and messages generated by the agents during interactions?**
A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages.
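
If you prefer to inspect the record directly, a small sketch along these lines works with just the standard library (the file path depends on where your app directory lives, and no table names are assumed):

```python
# Hedged sketch: peek inside AutoGen Studio's sqlite database. Table names are
# not assumed here; this just lists whatever tables exist and shows a few rows.
import sqlite3

conn = sqlite3.connect("database.sqlite")  # adjust the path to your app directory
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
)]
for table in tables:
    rows = conn.execute(f"SELECT * FROM {table} LIMIT 2").fetchall()
    print(table, rows)
conn.close()
```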
**Q: Can I use other models with AutoGen Studio?**
Yes. AutoGen standardizes on the OpenAI model API format, so you can use any API server that offers an OpenAI-compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can enter your model endpoint details, including `model name`, `api key`, `base url`, `model type`, and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment ID or engine, and the `model type` is "azure".
For other open-source models, we recommend using a server such as vLLM to serve an OpenAI-compliant endpoint.
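
As a minimal sketch of what such an endpoint configuration looks like in code, using the `Model` and `LLMConfig` dataclasses introduced in this commit's `datamodel.py` (the model name, API key, and `base_url` below are placeholders for your own server):

```python
# Hedged sketch: point an agent's llm_config at a local OpenAI-compliant server
# (for example, one served by vLLM). The endpoint URL, model name, and API key
# are placeholders, not values defined anywhere in this commit.
from autogenstudio.datamodel import LLMConfig, Model

local_model = Model(
    model="mistral-7b-instruct",            # placeholder model name
    api_key="EMPTY",                        # many local servers accept any key
    base_url="http://localhost:8000/v1",    # placeholder OpenAI-compliant endpoint
)

llm_config = LLMConfig(config_list=[local_model], temperature=0)
print(llm_config.dict())  # serialized form of the endpoint configuration
```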
## Acknowledgements
AutoGen Studio is based on the [AutoGen](https://microsoft.github.io/autogen) project. It was adapted from a research prototype built in October 2023 (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang).
1 change: 0 additions & 1 deletion samples/apps/autogen-studio/autogenstudio/chatmanager.py
@@ -19,7 +19,6 @@ def chat(self, message: Message, history: List, flow_config: AgentWorkFlowConfig
if flow_config is None:
flow_config = get_default_agent_config(scratch_dir)

# print("Flow config: ", flow_config)
flow = AutoGenWorkFlowManager(config=flow_config, history=history, work_dir=scratch_dir)
message_text = message.content.strip()

18 changes: 15 additions & 3 deletions samples/apps/autogen-studio/autogenstudio/cli.py
@@ -4,6 +4,7 @@
import uvicorn

from .version import VERSION
from .utils.dbutils import DBManager

app = typer.Typer()

@@ -15,12 +16,23 @@ def ui(
workers: int = 1,
reload: Annotated[bool, typer.Option("--reload")] = False,
docs: bool = False,
appdir: str = None,
):
"""
Launch the AutoGen Studio UI CLI. Pass in parameters host, port, workers, and reload to override the default values.
Run the AutoGen Studio UI.
Args:
host (str, optional): Host to run the UI on. Defaults to 127.0.0.1 (localhost).
port (int, optional): Port to run the UI on. Defaults to 8081.
workers (int, optional): Number of workers to run the UI with. Defaults to 1.
reload (bool, optional): Whether to reload the UI on code changes. Defaults to False.
docs (bool, optional): Whether to generate API docs. Defaults to False.
appdir (str, optional): Path to the AutoGen Studio app directory. Defaults to None.
"""

os.environ["AUTOGENUI_API_DOCS"] = str(docs)
os.environ["AUTOGENSTUDIO_API_DOCS"] = str(docs)
if appdir:
os.environ["AUTOGENSTUDIO_APPDIR"] = appdir

uvicorn.run(
"autogenstudio.web.app:app",
@@ -37,7 +49,7 @@ def version():
Print the version of the AutoGen Studio UI CLI.
"""

typer.echo(f"AutoGen Studio UI CLI version: {VERSION}")
typer.echo(f"AutoGen Studio CLI version: {VERSION}")


def run():
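
For context, a rough Python equivalent of what the updated `ui` command does (based only on the `cli.py` changes above; the host, port, and app-directory values are examples, and the full `uvicorn.run` call is truncated in this diff):

```python
# Hedged sketch of launching the AutoGen Studio web app the way the CLI does:
# set the environment variables the new code reads, then hand off to uvicorn.
import os
import uvicorn

os.environ["AUTOGENSTUDIO_API_DOCS"] = "True"              # what the --docs flag sets
os.environ["AUTOGENSTUDIO_APPDIR"] = "/tmp/autogenstudio"  # example --appdir value

uvicorn.run(
    "autogenstudio.web.app:app",
    host="127.0.0.1",  # CLI default per the docstring
    port=8081,         # CLI default per the docstring
    workers=1,
    reload=False,
)
```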
95 changes: 85 additions & 10 deletions samples/apps/autogen-studio/autogenstudio/datamodel.py
@@ -58,25 +58,46 @@ def dict(self):

# autogenflow data models
@dataclass
class ModelConfig:
class Model:
"""Data model for Model Config item in LLMConfig for AutoGen"""

model: str
api_key: Optional[str] = None
base_url: Optional[str] = None
api_type: Optional[str] = None
api_version: Optional[str] = None
id: Optional[str] = None
timestamp: Optional[str] = None
user_id: Optional[str] = None
description: Optional[str] = None

def dict(self):
result = asdict(self)
return result

def __post_init__(self):
if self.id is None:
self.id = str(uuid.uuid4())
if self.timestamp is None:
self.timestamp = datetime.now().isoformat()
if self.user_id is None:
self.user_id = "default"


@dataclass
class LLMConfig:
"""Data model for LLM Config for AutoGen"""

config_list: List[Any] = field(default_factory=List)
config_list: List[Any] = field(default_factory=list)
temperature: float = 0
cache_seed: Optional[Union[int, None]] = None
timeout: Optional[int] = None

def dict(self):
result = asdict(self)
result["config_list"] = [c.dict() for c in self.config_list]
return result
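
A small usage sketch for the renamed `Model` dataclass and the corrected `LLMConfig` above (all values are illustrative; note that `__post_init__` now fills in `id`, `timestamp`, and `user_id` automatically):

```python
# Hedged sketch: Model auto-populates its metadata fields in __post_init__,
# so callers only need to supply the model name; other values are examples.
from autogenstudio.datamodel import LLMConfig, Model

m = Model(model="gpt-4", description="example model entry")
assert m.id is not None and m.user_id == "default"

config = LLMConfig(config_list=[m], temperature=0)
print(config.dict())  # the nested Model is serialized via its own dict()
```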


@dataclass
class AgentConfig:
@@ -101,8 +122,8 @@ def dict(self):
class AgentFlowSpec:
"""Data model to help flow load agents from config"""

type: Literal["assistant", "userproxy", "groupchat"]
config: AgentConfig = field(default_factory=AgentConfig)
type: Literal["assistant", "userproxy"]
config: AgentConfig
id: Optional[str] = None
timestamp: Optional[str] = None
user_id: Optional[str] = None
@@ -122,24 +143,80 @@ def dict(self):
return result


@dataclass
class GroupChatConfig:
"""Data model for GroupChat Config for AutoGen"""

agents: List[AgentFlowSpec] = field(default_factory=list)
admin_name: str = "Admin"
messages: List[Dict] = field(default_factory=list)
max_round: Optional[int] = 10
admin_name: Optional[str] = "Admin"
speaker_selection_method: Optional[str] = "auto"
allow_repeat_speaker: Optional[Union[bool, List[AgentConfig]]] = True

def dict(self):
result = asdict(self)
result["agents"] = [a.dict() for a in self.agents]
return result


@dataclass
class GroupChatFlowSpec:
"""Data model to help flow load agents from config"""

type: Literal["groupchat"]
config: AgentConfig = field(default_factory=AgentConfig)
groupchat_config: Optional[GroupChatConfig] = field(default_factory=GroupChatConfig)
id: Optional[str] = None
timestamp: Optional[str] = None
user_id: Optional[str] = None
description: Optional[str] = None

def __post_init__(self):
if self.timestamp is None:
self.timestamp = datetime.now().isoformat()
if self.id is None:
self.id = str(uuid.uuid4())
if self.user_id is None:
self.user_id = "default"

def dict(self):
result = asdict(self)
# result["config"] = self.config.dict()
# result["groupchat_config"] = self.groupchat_config.dict()
return result
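
A sketch of how the new group chat dataclasses fit together (the `AgentConfig` body is collapsed in this diff, so it is assumed here to accept at least a `name`; all names and values are illustrative):

```python
# Hedged sketch: build a GroupChatFlowSpec from the new dataclasses above.
# AgentConfig's full field list is collapsed in this diff; `name` is assumed.
from autogenstudio.datamodel import (
    AgentConfig,
    AgentFlowSpec,
    GroupChatConfig,
    GroupChatFlowSpec,
)

writer = AgentFlowSpec(type="assistant", config=AgentConfig(name="writer"))
critic = AgentFlowSpec(type="assistant", config=AgentConfig(name="critic"))

group_spec = GroupChatFlowSpec(
    type="groupchat",
    config=AgentConfig(name="group_manager"),
    groupchat_config=GroupChatConfig(
        agents=[writer, critic],
        max_round=6,
        speaker_selection_method="auto",
    ),
)
print(group_spec.dict())  # plain asdict serialization; ids/timestamps auto-filled
```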


@dataclass
class AgentWorkFlowConfig:
"""Data model for Flow Config for AutoGen"""

name: str
description: str
sender: AgentFlowSpec
receiver: Union[AgentFlowSpec, List[AgentFlowSpec]]
type: Literal["default", "groupchat"] = "default"
receiver: Union[AgentFlowSpec, GroupChatFlowSpec]
type: Literal["twoagents", "groupchat"] = "twoagents"
id: Optional[str] = None
user_id: Optional[str] = None
timestamp: Optional[str] = None
# how the agent message summary is generated. last: only last message is used, none: no summary, llm: use llm to generate summary
summary_method: Optional[Literal["last", "none", "llm"]] = "last"

def init_spec(self, spec: Dict):
"""initialize the agent spec"""
if not isinstance(spec, dict):
spec = spec.dict()
if spec["type"] == "groupchat":
return GroupChatFlowSpec(**spec)
else:
return AgentFlowSpec(**spec)

def __post_init__(self):
if self.id is None:
self.id = str(uuid.uuid4())
self.sender = self.init_spec(self.sender)
self.receiver = self.init_spec(self.receiver)
if self.user_id is None:
self.user_id = "default"
if self.timestamp is None:
@@ -148,10 +225,7 @@ def __post_init__(self):
def dict(self):
result = asdict(self)
result["sender"] = self.sender.dict()
if isinstance(self.receiver, list):
result["receiver"] = [r.dict() for r in self.receiver]
else:
result["receiver"] = self.receiver.dict()
result["receiver"] = self.receiver.dict()
return result
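
And a sketch of the workflow-level dispatch: `init_spec` accepts either dataclass instances or plain dicts and rebuilds the receiver as a `GroupChatFlowSpec` when its `type` is `"groupchat"` (again assuming `AgentConfig` takes a `name`; all values are illustrative):

```python
# Hedged sketch: AgentWorkFlowConfig.__post_init__ runs init_spec on sender and
# receiver, so a dict receiver with type "groupchat" becomes a GroupChatFlowSpec.
from autogenstudio.datamodel import AgentConfig, AgentFlowSpec, AgentWorkFlowConfig

workflow = AgentWorkFlowConfig(
    name="Group Chat Workflow",
    description="A user proxy chats with a group of assistants",
    sender=AgentFlowSpec(type="userproxy", config=AgentConfig(name="user_proxy")),
    receiver={
        "type": "groupchat",
        "config": {"name": "group_manager"},
        "groupchat_config": {"agents": [], "max_round": 6},
    },
    type="groupchat",
    summary_method="last",
)
print(type(workflow.sender).__name__)    # -> AgentFlowSpec
print(type(workflow.receiver).__name__)  # -> GroupChatFlowSpec
```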


@@ -221,3 +295,4 @@ class DBWebRequestModel(object):
tags: Optional[List[str]] = None
agent: Optional[AgentFlowSpec] = None
workflow: Optional[AgentWorkFlowConfig] = None
model: Optional[Model] = None