Note: this repository was archived by the owner on March 1, 2024 and is now read-only.

R3M: A Universal Visual Representation for Robot Manipulation

This project studies how to learn generalizable visual representations for robotics from videos of humans paired with natural language. It contains the representations pre-trained on the Ego4D dataset from the R3M paper.

Installation

To install R3M into an existing conda environment, simply run pip install -e . from this directory.

Alternatively, you can build a fresh conda environment from the r3m_base.yaml file in this repository and then install with pip install -e . from this directory.
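For example, assuming conda is available, the fresh-environment route might look like the following (the environment name is whatever r3m_base.yaml defines, so substitute it in the activate step):

conda env create -f r3m_base.yaml
conda activate <ENV NAME FROM r3m_base.yaml>
pip install -e .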

You can verify that it installed correctly by running import r3m from a Python shell.

Using the representation

To use the model in your code, simply run:

from r3m import load_r3m
r3m = load_r3m("resnet50") # also accepts "resnet18" or "resnet34"
r3m.eval()

Further example code that uses a pre-trained representation is located in the example script in this repository.
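As a minimal sketch of encoding a single frame, modeled on that example (the frame.jpg path is hypothetical, and the [0-255] input convention and output shape follow the bundled example for the ResNet50 variant):

import torch
import torchvision.transforms as T
from PIL import Image
from r3m import load_r3m

device = "cuda" if torch.cuda.is_available() else "cpu"
r3m = load_r3m("resnet50")
r3m.eval()
r3m.to(device)

# Resize/crop to 224 x 224, then rescale to [0-255], which is what R3M expects.
transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = Image.open("frame.jpg").convert("RGB")  # hypothetical input frame
batch = transform(image).unsqueeze(0) * 255.0  # ToTensor gives [0, 1]; undo it
with torch.no_grad():
    embedding = r3m(batch.to(device))  # shape (1, 2048) for resnet50
print(embedding.shape)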

If you have any issues accessing or downloading R3M, please contact Suraj Nair: surajn (at) stanford (dot) edu

Training the representation

To train the representation, run:

python train_representation.py hydra/launcher=local hydra/output=local agent.langweight=1.0 agent.size=50 experiment=r3m_test dataset=ego4d doaug=rctraj agent.l1weight=0.00001 batch_size=16 datapath=<PATH TO PARSED Ego4D DATA> wandbuser=<WEIGHTS AND BIASES USER> wandbproject=<WEIGHTS AND BIASES PROJECT>

Note: for fast training, the Ego4D data loading code assumes the dataset has been parsed into frames, with one folder per video clip containing that clip's frames (resized to 224 x 224) numbered within the directory (for example, 000123.jpg). It also assumes a manifest.csv file with a row for each clip giving the path to the clip folder, the clip length, and the natural language annotation paired with the clip.
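For concreteness, a parsed dataset directory might look like the following (the clip folder names and manifest.csv column names here are illustrative only; check the Ego4D data loading code for the exact schema it expects):

<PATH TO PARSED Ego4D DATA>/
    manifest.csv
    clip_00001/
        000000.jpg
        000001.jpg
        ...
    clip_00002/
        ...

with manifest.csv rows along the lines of:

path,length,text
clip_00001,312,"open the drawer"
clip_00002,188,"pick up the knife"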

Evaluating the representation with behavior cloning

See the eval branch of this repository.

License

R3M is licensed under the MIT license.

Acknowledgements

Parts of this code are adapted from the DrQV2 codebase.
