LaVague Logo

Welcome to LaVague

A Large Action Model framework for developing AI Web Agents

🏄‍♀️ What is LaVague?

LaVague is an open-source Large Action Model framework to develop AI Web Agents.

Our web agents take an objective, such as "Print installation steps for Hugging Face's Diffusers library", and perform the required actions to achieve this goal by leveraging our two core components:

  • A World Model that takes an objective and the current state (i.e. the current web page) and turns them into instructions
  • An Action Engine which “compiles” these instructions into action code, e.g. Selenium or Playwright, and executes it (see the illustrative sketch below)
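
To make these two components concrete, here is an illustrative sketch of the kind of translation the Action Engine performs: a natural-language instruction produced by the World Model, followed by plain Selenium code it might be compiled into. This is not LaVague's actual output; the instruction text, URL, and element locator are assumptions chosen for illustration.

# Hypothetical instruction emitted by the World Model for the current page:
#   "Click on the 'Quicktour' link in the navigation menu"
# Illustrative Selenium code such an instruction could be compiled into:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://huggingface.co/docs/peft")
driver.find_element(By.LINK_TEXT, "Quicktour").click()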

🚀 Getting Started

Demo

Here is an example of how LaVague can take multiple steps to achieve the objective of "Go on the quicktour of PEFT":

Demo for agent

Hands-on

You can do this with the following steps:

  1. Download LaVague with:
pip install lavague
  2. Use our framework to build a Web Agent and implement the objective:
from lavague.core import WebAgent, WorldModel, ActionEngine
from lavague.drivers.selenium import SeleniumDriver

# Selenium driver the agent will use to control the browser
selenium_driver = SeleniumDriver()
# World Model: turns the objective and the current page state into instructions
world_model = WorldModel.from_hub("hf_example")
# Action Engine: compiles instructions into Selenium code and executes it
action_engine = ActionEngine(selenium_driver)
agent = WebAgent(action_engine, world_model)
agent.get("https://huggingface.co/docs")  # open the starting page
agent.run("Go on the quicktour of PEFT")  # pursue the objective step by step

For more information on this example and how to use LaVague, see our quick-tour.

Note: these examples use our default OpenAI API configuration, so you will need to set the OPENAI_API_KEY environment variable to a valid API key for them to work.
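
For example, you can set the key from Python before building the agent; this is a generic snippet rather than LaVague-specific setup, and the value shown is a placeholder, not a real key:

import os
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"  # placeholder: replace with your own OpenAI API key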

For an end-to-end example of LaVague in a Google Colab, see our quick-tour notebook.

🙋 Contributing

We would love your help and support on our quest to build a robust and reliable Large Action Model for web automation.

To avoid having multiple people working on the same things & being unable to merge your work, we have outlined the following contribution process:

  1. 📢 We outline tasks on our backlog: we recommend you check out issues with the help-wanted and good first issue labels
  2. 🙋‍♀️ If you are interested in working on one of these tasks, comment on the issue!
  3. 🤝 We will discuss the task with you and assign it to you with a community assigned label
  4. 💬 We will then be available to discuss this task with you
  5. ⬆️ You should submit your work as a PR
  6. ✅ We will review & merge your code or request changes/give feedback

Please check out our contributing guide for a more detailed guide.

If you want to ask questions, contribute, or have proposals, please come on our Discord to chat!

🗺️ Roadmap

Keep up to date with our project backlog here.

🚨 Security warning

Note: this project executes LLM-generated code using exec, which is not considered a safe practice. We therefore recommend taking extra care when using LaVague and running it in a sandboxed environment!

📈 Data collection

We want to build a dataset that can be used by the AI community to build better Large Action Models for better Web Agents. You can see our work so far on building community datasets on our BigAction HuggingFace page.

This is why LaVague collects the following user data telemetry by default:

  • Version of LaVague installed
  • Code generated for each web action step
  • LLM used (e.g. GPT-4)
  • Multimodal LLM used (e.g. GPT-4)
  • Randomly generated anonymous user ID
  • Whether you are using a CLI command or our library directly
  • The instruction used/generated
  • The objective used (if you are using the agent)
  • The chain of thoughts (if you are using the agent)
  • The interaction zone on the page (bounding box)
  • The viewport size of your browser
  • The URL you performed an action on
  • Whether the action failed or succeeded
  • Error message, where relevant
  • The source nodes (chunks of HTML code retrieved from the web page to perform this action)

🚫 Turn off all telemetry

If you want to turn off all telemetry, you can set the TELEMETRY_VAR environment variable to "NONE".

If you are running LaVague locally in a Linux environment, you can persistently set this variable for your environment with the following steps:

  1. Add export TELEMETRY_VAR="NONE" to your ~/.bashrc, ~/.bash_profile, or ~/.profile file (which file you have depends on your shell and its configuration)
  2. Run source ~/.bashrc (or source ~/.bash_profile or source ~/.profile) to apply your changes without having to log out and back in

In a notebook cell, you can use:

import os
os.environ['TELEMETRY_VAR'] = "NONE"  # disable telemetry for this session
