SegDrawer

Simple static web-based mask drawer, supporting semantic segmentation with the Segment Anything Model (SAM) and video segmentation propagation with XMem.

  • Video Segmentation with XMem (demo GIFs: original video, first-frame segmentation, VideoSeg result)

Tools

From top to bottom:

  • Clear image
  • Drawer
  • SAM point-segmenter (needs backend)
  • SAM rect-segmenter (needs backend)
  • SAM Seg-Everything (needs backend)
  • Undo
  • Eraser
  • Download
  • VideoSeg (needs backend)

After Seg-Everything, the download includes a .zip file containing all of the cut-offs.
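
For reference, a minimal Python sketch of inspecting that archive; the file name cutoffs.zip is only an assumption, so use whatever name the browser actually saved:

import zipfile

# Assumed file name for the Seg-Everything download.
archive_path = "cutoffs.zip"

with zipfile.ZipFile(archive_path) as zf:
    print(zf.namelist())      # list every cut-off stored in the archive
    zf.extractall("cutoffs")  # unpack the cut-offs into a local folder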

For video segmentation, XMem needs an initial segmentation map, which can easily be produced with SAM. Upload a video just as you would upload an image, draw a segmentation on the first frame, then click the final VideoSeg button to send it to the server and wait for the segmented video to download automatically.

Note: avoid drawing the segmentation map by hand with the Drawer tool (the same holds for the Eraser), since hand-drawn strokes produce non-uniform colors, especially along mask edges. Such masks degrade XMem video segmentation; see the original XMem paper for details.
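
If you do end up with a hand-drawn mask, one possible workaround (not part of this repo) is to snap every pixel to the nearest color of a fixed palette before passing the mask to XMem. A minimal NumPy/Pillow sketch, assuming a single red object on a black background and a hypothetical file name:

import numpy as np
from PIL import Image

# Assumed palette: black background plus one red object label.
palette = np.array([[0, 0, 0], [255, 0, 0]], dtype=np.int64)

# Hypothetical file name for the hand-drawn first-frame mask.
mask = np.array(Image.open("first_frame_mask.png").convert("RGB"), dtype=np.int64)

# Distance from each pixel to each palette color, then snap to the nearest one.
dist = np.linalg.norm(mask[:, :, None, :] - palette[None, None, :, :], axis=-1)
clean = palette[dist.argmin(axis=-1)].astype(np.uint8)

Image.fromarray(clean).save("first_frame_mask_clean.png")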

Run Locally

If you don't need SAM for segmentation, just open segDrawer.html and use all tools except the SAM segmenters.

To use the SAM segmenters, follow the steps below (running on CPU can be slow):

  • Download the model checkpoints
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth
wget -P ./XMem/saves/ https://github.com/hkchengrex/XMem/releases/download/v1.0/XMem.pth
  • Launch the backend
python server.py
  • Open in your browser
http://127.0.0.1:8000

To configure the device (CPU/GPU) and the model, change these lines in server.py:

sam_checkpoint = "sam_vit_l_0b3195.pth" # "sam_vit_l_0b3195.pth" or "sam_vit_h_4b8939.pth"
model_type = "vit_l" # "vit_l" or "vit_h"
device = "cuda" # "cuda" if torch.cuda.is_available() else "cpu"

Run on Colab

Follow the Colab example notebook, or run it directly on Colab. You need to register an ngrok account and replace "{your_token}" with your own token.
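
If the notebook uses the pyngrok package, the tunnel step looks roughly like the sketch below (an assumption; the notebook may instead call the ngrok binary directly):

from pyngrok import ngrok

# Replace {your_token} with the token from your ngrok dashboard.
ngrok.set_auth_token("{your_token}")

# Expose the local backend running on port 8000 (as in the local setup above).
tunnel = ngrok.connect(8000)
print(tunnel.public_url)  # open this URL in the browser instead of 127.0.0.1:8000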
