
Self-Operating Computer Framework

A framework to enable multimodal models to operate a computer.

Using the same inputs and outputs as a human operator, the model views the screen and decides on a series of mouse and keyboard actions to reach an objective.
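The loop described above (view the screen, ask the model for an action, execute it) can be sketched as follows. The action format shown here ("CLICK x y", "TYPE text", "DONE") is a hypothetical illustration for clarity, not the framework's actual protocol:

```python
# Minimal sketch of the observe -> decide -> act loop.
# The action strings below are a hypothetical format, not the real one.
import re

def parse_action(response: str) -> dict:
    """Parse a model's text response into a structured action."""
    response = response.strip()
    if response == "DONE":
        return {"type": "done"}
    m = re.match(r"CLICK\s+([\d.]+)\s+([\d.]+)$", response)
    if m:
        # Coordinates are fractions of screen width/height in [0, 1].
        return {"type": "click", "x": float(m.group(1)), "y": float(m.group(2))}
    m = re.match(r"TYPE\s+(.*)$", response, re.DOTALL)
    if m:
        return {"type": "type", "text": m.group(1)}
    raise ValueError(f"Unrecognized action: {response!r}")
```

An executor would then dispatch each parsed action to mouse/keyboard automation (e.g. via a library such as pyautogui) and loop until the model returns a "done" signal.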

Key Features

  • Compatibility: Designed for various multimodal models.
  • Integration: Currently integrated with GPT-4V as the default model.
  • Future Plans: Support for additional models.

Current Challenges

Note: GPT-4V's error rate in estimating XY mouse click locations is currently quite high. This framework aims to track the progress of multimodal models over time, aspiring to achieve human-level performance in computer operation.
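Because the model predicts click positions as fractions of the screen rather than exact pixels, measuring its error means converting those fractions to pixels and comparing against the true target. A small sketch (the coordinate convention here is an assumption for illustration):

```python
import math

def to_pixels(x_frac: float, y_frac: float, screen_w: int, screen_h: int):
    """Convert normalized [0, 1] coordinates to integer pixel positions."""
    return round(x_frac * screen_w), round(y_frac * screen_h)

def click_error(predicted: tuple, target: tuple) -> float:
    """Euclidean pixel distance between a predicted and a true click point."""
    return math.hypot(predicted[0] - target[0], predicted[1] - target[1])
```

Tracking this distance over many objectives gives a concrete way to measure progress toward human-level click accuracy.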

Ongoing Development

At HyperwriteAI, we are developing a multimodal model with more accurate click location predictions.

Additional Thoughts

We recognize that some operating system functions may be executed more efficiently with hotkeys, such as using Command+L to focus the browser address bar rather than simulating a mouse click at the correct XY location. We plan to make these improvements over time. However, it's important to note that many actions require accurate selection of visual elements on the screen, necessitating precise XY mouse click locations. A primary focus of this project is to refine the accuracy of determining these click locations. We believe this is essential for achieving a fully self-operating computer in the current technological landscape.
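One way the hotkey idea above could look in practice is a table mapping high-level intents to key combinations, with a mouse click as the fallback. The bindings and names here are hypothetical illustrations, not the framework's actual configuration:

```python
# Hypothetical intent -> macOS hotkey table; these bindings are
# illustrative assumptions, not the framework's real config.
MAC_HOTKEYS = {
    "focus_address_bar": ("command", "l"),
    "new_tab": ("command", "t"),
    "find_on_page": ("command", "f"),
}

def keys_for(intent: str):
    """Return the key combination for an intent, or None if an XY click is needed."""
    return MAC_HOTKEYS.get(intent)
```

An executor could then call, for example, `pyautogui.hotkey(*keys_for("focus_address_bar"))`, and fall back to an XY mouse click whenever `keys_for` returns None.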

Demo

Demo video: final-low.mp4

Quick Start Instructions

Below are instructions to set up the Self-Operating Computer Framework locally on your computer.

  1. Clone the repo to a directory on your computer:
git clone https://github.com/OthersideAI/self-operating-computer.git
  2. Change into the directory:
cd self-operating-computer
  3. Create a Python virtual environment. Learn more about Python virtual environments.
python3 -m venv venv
  4. Activate the virtual environment:
source venv/bin/activate
  5. Install the project requirements:
pip install -r requirements.txt
  6. Install the project and command-line interface:
pip install .
  7. Rename the .example.env file to .env so that you can save your OpenAI key in it:
mv .example.env .env
  8. Add your OpenAI key to your new .env file. If you don't have one, you can obtain an OpenAI key here:
OPENAI_API_KEY='your-key-here'
  9. Run it!
operate
  10. Final step: the Terminal app will ask for "Screen Recording" and "Accessibility" permissions in the "Security & Privacy" pane of macOS's "System Preferences".
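Steps 7 and 8 above store the key as a simple KEY='value' line in .env. A minimal sketch of how such a file can be parsed is below; whether the project itself uses a loader like python-dotenv or its own parser is not specified here, so treat this as an illustration:

```python
def read_env(path: str = ".env") -> dict:
    """Parse simple KEY='value' lines from a .env file into a dict.

    Blank lines, comments, and lines without '=' are skipped;
    surrounding single or double quotes are stripped from values.
    """
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip("'\"")
    return values
```

With the .env from step 8, `read_env()["OPENAI_API_KEY"]` would yield the saved key.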

Contributions are Welcomed! Some Ideas:

  • Prompt Improvements: Noticed any areas for prompt improvements? Feel free to make suggestions or submit a pull request (PR).
  • Enabling New Mouse Capabilities (drag, hover, etc.)
  • Adding New Multimodal Models: Integration of new multimodal models is welcomed. If you have a specific model in mind that you believe would be a valuable addition, please feel free to integrate it and submit a PR.
  • Framework Architecture Improvements: Think you can enhance the framework architecture described in the intro? We welcome suggestions and PRs.

For any input on improving this project, feel free to reach out to me on Twitter.

Follow HyperWriteAI for More Updates

Stay updated with the latest developments:

Compatibility

  • This project is only compatible with macOS at this time.
