# VirtWorkForce

By David Hamner
VirtWorkForce is a web-based visual workflow editor for large language models (LLMs), inspired by the concept of ComfyUI but tailored for text-based AI models. It allows users to create, edit, and execute complex LLM workflows visually, providing a powerful and intuitive interface for AI-driven text processing and generation tasks.
## Comparison with ComfyUI

While ComfyUI focuses on creating workflows for image generation and manipulation, VirtWorkForce applies a similar node-based approach to LLM operations. This allows users to:
- Chain multiple LLM operations in a visual, intuitive manner
- Mix and match different LLM models within a single workflow
- Incorporate conditional logic and branching in text processing pipelines
- Visualize the flow of text data through various processing steps
- Easily experiment with and fine-tune complex LLM workflows
By providing this visual interface, VirtWorkForce aims to make advanced LLM operations more accessible to both developers and non-technical users, similar to how ComfyUI has done for image generation workflows.
## Features

- Visual node-based workflow creation for LLM operations
- Support for different node types (prompt, display, if-else, regular)
- Integration with Ollama for diverse AI model execution
- Real-time workflow execution with node highlighting
- Save and load workflow functionality (an example workflow file is sketched below)
- Responsive and intuitive user interface
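Saved workflows live in the `workflows/` directory. The exact schema is defined by `WorkflowIO.js`, so the file below is only an illustrative sketch; every field name here is an assumption, not the authoritative format:

```json
{
  "nodes": [
    {
      "id": "node-1",
      "type": "prompt",
      "model": "llama3",
      "prompt": "Summarize the following text: {input}",
      "position": { "x": 100, "y": 120 }
    },
    { "id": "node-2", "type": "display", "position": { "x": 420, "y": 120 } }
  ],
  "connections": [{ "source": "node-1", "target": "node-2" }]
}
```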
## Technology Stack

- Frontend: HTML, CSS, JavaScript
- Backend: Python with Flask
- AI Integration: Ollama
- WebSocket: Flask-SocketIO for real-time updates (see the sketch after this list)
- jsPlumb: For creating visual connections between nodes
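To illustrate how the real-time node highlighting could be wired up, here is a minimal Flask-SocketIO sketch. The event names (`node_started`, `node_finished`) and payload shapes are assumptions for illustration; the actual ones are defined in `app.py` and `ExecutionManager.js`:

```python
# Minimal sketch of server-side real-time updates with Flask-SocketIO.
# Event names and payloads are hypothetical, not VirtWorkForce's actual API.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def execute_node(node):
    # Placeholder: the real implementation would call the LLM,
    # evaluate if-else conditions, etc.
    return f"output of {node['id']}"

def run_workflow(nodes):
    for node in nodes:
        # Tell connected clients which node is executing so the
        # frontend can highlight it.
        socketio.emit('node_started', {'id': node['id']})
        result = execute_node(node)
        socketio.emit('node_finished', {'id': node['id'], 'output': result})

if __name__ == '__main__':
    socketio.run(app, port=5000)
```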
## Setup

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/VirtWorkForce.git
  cd VirtWorkForce
  ```

- Create a virtual environment and activate it:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```

- Install the required Python packages:

  ```bash
  pip install flask flask-socketio ollama pyyaml
  ```

- Ensure Ollama is installed and running on your system. Visit Ollama's official website for installation instructions (a quick connectivity check is shown after this list).
- Download jsPlumb from jsDelivr and place it in the `static/package/dist/js/` directory.
- Run the Flask application:

  ```bash
  python app.py
  ```

- Open a web browser and navigate to `http://localhost:5000`
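To confirm that Ollama is reachable before building a workflow, you can run a quick check with the Python client installed above. The model name is only an example; substitute any model you have pulled locally (e.g. with `ollama pull llama3`):

```python
# Sanity check: verify the Ollama server is running and a model responds.
import ollama

print(ollama.list())  # raises a connection error if the server is down

response = ollama.chat(
    model='llama3',  # example model; use any model you have pulled
    messages=[{'role': 'user', 'content': 'Say hello in one sentence.'}],
)
print(response['message']['content'])
```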
## Usage

- Use the toolbar buttons to add different types of nodes to the canvas
- Drag nodes to position them on the canvas
- Connect nodes by clicking and dragging from output ports to input ports
- Configure nodes by entering prompts, selecting models, or setting conditions (an illustrative branching sketch follows this list)
- Save your workflow using the save button
- Load existing workflows using the load dropdown
- Execute the workflow by clicking the play button
- View results in real-time as the workflow executes
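For a sense of what an if-else node does during execution, here is an illustrative sketch of condition-based branching. This is not the actual logic in `app.py`; the condition format (a regular expression) and the field names are assumptions:

```python
# Illustrative only: one way an if-else node could route text between branches.
import re

def evaluate_if_else(node, text):
    """Return the id of the next node to execute, depending on whether
    the node's condition (here, a regex) matches the incoming text."""
    if re.search(node['condition'], text):
        return node['true_branch']
    return node['false_branch']

node = {'condition': r'\bpositive\b', 'true_branch': 'node-3', 'false_branch': 'node-4'}
print(evaluate_if_else(node, 'The sentiment is positive.'))  # -> node-3
```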
## Project Structure

```
VirtWorkForce/
│
├── app.py
├── static/
│   ├── main.js
│   ├── WorkflowEditor.js
│   ├── NodeManager.js
│   ├── ConnectionManager.js
│   ├── WorkflowIO.js
│   ├── ExecutionManager.js
│   ├── styles.css
│   └── package/
│       └── dist/
│           └── js/
│               └── jsplumb.min.js
├── templates/
│   └── index.html
├── workflows/
└── img/
    └── example.png
```
## Contributing

Contributions to improve VirtWorkForce are welcome. Please follow these steps:

- Fork the repository
- Create a new branch (`git checkout -b feature/AmazingFeature`)
- Make your changes and commit them (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a pull request
## License

This project is licensed under the GNU General Public License v3.0.
## Acknowledgments

- This project was developed with the assistance of Claude.ai, an AI coding assistant.
- Thanks to the Ollama project for providing the AI model integration capabilities.
- Inspired by the concept of ComfyUI, adapted for LLM workflows.
- jsPlumb library sourced from jsDelivr
## Contact

For any questions or feedback, please open an issue in the GitHub repository or contact David Hamner directly.
## Troubleshooting

If you encounter any issues while setting up or running VirtWorkForce, please check the following:
- Ensure all dependencies are correctly installed
- Verify that Ollama is running and accessible (a quick check is shown after this list)
- Check the console for any JavaScript errors
- Review the Flask server logs for backend errors
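Assuming Ollama is listening on its default port (11434), you can verify it is reachable from Python without any extra dependencies:

```python
# Checks that the Ollama server answers on its default port.
# A JSON response listing your local models means the server is up.
import urllib.request

with urllib.request.urlopen('http://localhost:11434/api/tags') as resp:
    print(resp.read().decode())
```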
If problems persist, please open an issue on the GitHub repository with a detailed description of the problem and steps to reproduce it.