A framework to enable multimodal models to operate a computer.
Using the same inputs and outputs as a human operator, the model views the screen and decides on a series of mouse and keyboard actions to reach an objective.
- Compatibility: Designed for various multimodal models.
- Integration: Currently integrated with GPT-4V as the default model.
- Future Plans: Support for additional models.
Note: GPT-4V's error rate in estimating XY mouse click locations is currently quite high. This framework aims to track the progress of multimodal models over time, aspiring to achieve human-level performance in computer operation.
At HyperwriteAI, we are developing a multimodal model with more accurate click location predictions.
We recognize that some operating system functions may be executed more efficiently with hotkeys, such as entering the browser address bar with `command + L`, rather than by simulating a mouse click at the correct XY location. We plan to make these improvements over time. However, many actions require the accurate selection of visual elements on the screen, necessitating precise XY mouse click locations. A primary focus of this project is to refine the accuracy of determining these click locations. We believe this is essential for achieving a fully self-operating computer in the current technological landscape.
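To make the loop concrete, below is a heavily simplified sketch of the observe, decide, act cycle, assuming `pyautogui` for input simulation and the OpenAI Python client for GPT-4V. The function names and the `CLICK`/`TYPE`/`HOTKEY` action grammar are illustrative assumptions, not the framework's actual protocol.

```python
# A minimal, illustrative sketch of the observe -> decide -> act loop.
import base64
import io

import pyautogui                # simulates mouse and keyboard input
from openai import OpenAI      # OpenAI Python client (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def capture_screen_b64() -> str:
    """Grab the current screen and encode it for the vision model."""
    buf = io.BytesIO()
    pyautogui.screenshot().save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()


def decide_next_action(objective: str) -> str:
    """Ask GPT-4V for one action, e.g. 'CLICK 512 300' or 'TYPE hello'."""
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Objective: {objective}. Reply with a single action."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{capture_screen_b64()}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()


def execute(action: str) -> None:
    """Translate the model's reply into a mouse or keyboard event."""
    verb, _, rest = action.partition(" ")
    if verb == "CLICK":
        x, y = (int(v) for v in rest.split())
        pyautogui.click(x, y)       # accuracy here is the hard part
    elif verb == "TYPE":
        pyautogui.write(rest)
    elif verb == "HOTKEY":          # e.g. "HOTKEY command l" for the address bar
        pyautogui.hotkey(*rest.split())


def operate(objective: str, max_steps: int = 10) -> None:
    for _ in range(max_steps):
        action = decide_next_action(objective)
        if action == "DONE":
            break
        execute(action)
```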
[Demo video: final-low.mp4]
Below are instructions to set up the Self-Operating Computer Framework locally on your computer.
- Clone the repo to a directory on your computer:
git clone https://github.com/OthersideAI/self-operating-computer.git
- `cd` into the directory:
cd self-operating-computer
- Create a Python virtual environment. Learn more about Python virtual environments.
python3 -m venv venv
- Activate the virtual environment:
source venv/bin/activate
- Install the project requirements:
pip install -r requirements.txt
- Install the project and its command-line interface:
pip install .
- Then rename the `.example.env` file to `.env` so that you can save your OpenAI key in it:
mv .example.env .env
- Add your OpenAI key to your new `.env` file. If you don't have one, you can obtain an OpenAI key here (a quick sanity check that the key loads appears after these setup steps):
OPENAI_API_KEY='your-key-here'
- Run it!
operate
- Final Step: The Terminal app will ask for "Screen Recording" and "Accessibility" permissions in the "Security & Privacy" pane of macOS "System Preferences".
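As a quick sanity check that your key is being picked up, the snippet below loads the `.env` file and asserts the variable is set. It assumes `python-dotenv`-style loading, which is a common pattern but not necessarily how the framework itself reads the key.

```python
# Sanity check that the key is visible; assumes python-dotenv
# (pip install python-dotenv), a common pattern but not necessarily
# how the framework itself loads the key.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY not found - check your .env"
print("OpenAI key loaded.")
```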
- Prompt Improvements: Noticed any areas for prompt improvements? Feel free to make suggestions or submit a pull request (PR).
- Enabling New Mouse Capabilities: Add support for additional mouse actions such as drag and hover (one possible starting point is sketched after this list).
- Adding New Multimodal Models: Integration of new multimodal models is welcomed. If you have a specific model in mind that you believe would be a valuable addition, please feel free to integrate it and submit a PR (one possible adapter shape is also sketched after this list).
- Framework Architecture Improvements: Think you can enhance the framework architecture described in the intro? We welcome suggestions and PRs.
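For new mouse capabilities, here is one possible starting point using `pyautogui`; the function names and shapes are assumptions, not the framework's current interface.

```python
# Hypothetical helpers for new mouse actions using pyautogui; the names
# and shapes are assumptions, not the framework's current interface.
import pyautogui


def drag(x1: int, y1: int, x2: int, y2: int, duration: float = 0.5) -> None:
    """Press at (x1, y1), drag to (x2, y2), then release."""
    pyautogui.moveTo(x1, y1)
    pyautogui.dragTo(x2, y2, duration=duration)


def hover(x: int, y: int, duration: float = 0.25) -> None:
    """Move the cursor to (x, y) without clicking, to trigger hover states."""
    pyautogui.moveTo(x, y, duration=duration)
```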
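For new model integrations, one possible adapter shape is sketched below; the class and method names are illustrative, not an existing extension point in the codebase.

```python
# One possible shape for a pluggable model adapter; the class and method
# names are illustrative, not an existing extension point in the codebase.
from abc import ABC, abstractmethod


class MultimodalModel(ABC):
    """Adapter contract: map (objective, screenshot) to one action string."""

    @abstractmethod
    def next_action(self, objective: str, screenshot_b64: str) -> str:
        ...


class EchoModel(MultimodalModel):
    """Trivial stand-in showing where a real model's API call would go."""

    def next_action(self, objective: str, screenshot_b64: str) -> str:
        return "DONE"  # a real adapter would return e.g. "CLICK 512 300"
```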
For any input on improving this project, or to stay updated with the latest developments, feel free to reach out to me on Twitter.
- This project is compatible only with macOS at this time.