
Computer Vision Annotation Tool (CVAT)


CVAT is an interactive video and image annotation tool for computer vision. It is used by tens of thousands of users and companies around the world. Our mission is to help developers, companies, and organizations solve real problems using a data-centric AI approach.

Start using CVAT online: cvat.ai. You can use it for free, or subscribe to get unlimited data, organizations, auto-annotations, and Roboflow and HuggingFace integrations.

Or set up CVAT as a self-hosted solution: Self-hosted Installation Guide. We provide Enterprise support for self-hosted installations, with premium features: SSO, LDAP, Roboflow and HuggingFace integrations, and advanced analytics (coming soon). We also offer training and dedicated support with a 24-hour SLA.


Quick start ⚡

Partners ❤️

CVAT is used by teams all over the world. The list below includes key companies that help us support the product or are an essential part of our ecosystem. If you use CVAT, please drop us a line at [email protected].

  • Human Protocol uses CVAT as a way of adding an annotation service to the Human Protocol.
  • FiftyOne is an open-source dataset curation and model analysis tool for visualizing, exploring, and improving computer vision datasets and models. It is tightly integrated with CVAT for annotation and label refinement.

Public datasets

ATLANTIS, an open-source dataset for semantic segmentation of waterbody images developed by the iWERS group in the Department of Civil and Environmental Engineering at the University of South Carolina, is using CVAT.

For developing a semantic segmentation dataset using CVAT, see:

CVAT online: cvat.ai

This is an online version of CVAT. It's free, efficient, and easy to use.

cvat.ai runs the latest version of the tool. You can create up to 10 tasks there and upload up to 500 MB of data to annotate. Your data will only be visible to you and to the people you assign to it.

For now, it does not have analytics features such as managing and monitoring a data annotation team. It also does not allow exporting images, only the annotations.

We plan to enhance cvat.ai with new powerful features. Stay tuned!

Prebuilt Docker images 🐳

Prebuilt Docker images are the easiest way to start using CVAT locally. They are available on Docker Hub:

The images have been downloaded more than 1M times so far.

Screencasts 🎦

Here are some screencasts showing how to use CVAT.

Computer Vision Annotation Course: we introduce our course series designed to help you annotate data faster and better using CVAT. This course is about CVAT deployment and integrations; it includes presentations and covers the following topics:

  • Speeding up your data annotation process: introduction to CVAT and Datumaro. What problems do CVAT and Datumaro solve, and how they can speed up your model training process. Some resources you can use to learn more about how to use them.
  • Deploying and using CVAT: using the app online at app.cvat.ai, a containerized local deployment with Docker Compose (for regular use), and a local cluster deployment with Kubernetes (for enterprise users). The video includes a 2-minute tour of the interface, a breakdown of CVAT's internals, and a demonstration of how to deploy CVAT using Docker Compose.

Product tour: in this course, we show how to use CVAT and help you get familiar with CVAT functionality and interfaces. This course does not cover integrations and is dedicated solely to CVAT. It covers the following topics:

  • Pipeline. In this video, we show how to use app.cvat.ai: how to sign up, upload your data, annotate it, and download it.

For feedback, please see the Contact us section below.

API

SDK
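
Below is a minimal sketch of creating a task with the Python SDK, assuming the cvat-sdk package is installed (pip install cvat-sdk); the host, credentials, label, and file names are placeholders rather than values from this repository.

```python
# Hypothetical example: create a CVAT task from local images with the Python SDK.
from cvat_sdk import make_client
from cvat_sdk.core.proxies.tasks import ResourceType

# Placeholder host and credentials -- replace with your own server and account.
with make_client(host="app.cvat.ai", credentials=("user", "password")) as client:
    task = client.tasks.create_from_data(
        spec={"name": "example task", "labels": [{"name": "car"}]},
        resource_type=ResourceType.LOCAL,
        resources=["image1.jpg", "image2.jpg"],
    )
    print(task.id, task.status)
```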

CLI

Supported annotation formats

CVAT supports multiple annotation formats. You can select the format after clicking the Upload annotation and Dump annotation buttons. The Datumaro dataset framework allows additional dataset transformations with its command-line tool and Python library; a brief sketch of the Python workflow follows the table below.

For more information about the supported formats, see: Annotation Formats.

Annotation format Import Export
CVAT for images ✔️ ✔️
CVAT for a video ✔️ ✔️
Datumaro ✔️ ✔️
PASCAL VOC ✔️ ✔️
Segmentation masks from PASCAL VOC ✔️ ✔️
YOLO ✔️ ✔️
MS COCO Object Detection ✔️ ✔️
MS COCO Keypoints Detection ✔️ ✔️
TFrecord ✔️ ✔️
MOT ✔️ ✔️
MOTS PNG ✔️ ✔️
LabelMe 3.0 ✔️ ✔️
ImageNet ✔️ ✔️
CamVid ✔️ ✔️
WIDER Face ✔️ ✔️
VGGFace2 ✔️ ✔️
Market-1501 ✔️ ✔️
ICDAR13/15 ✔️ ✔️
Open Images V6 ✔️ ✔️
Cityscapes ✔️ ✔️
KITTI ✔️ ✔️
Kitti Raw Format ✔️ ✔️
LFW ✔️ ✔️
Supervisely Point Cloud Format ✔️ ✔️
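
As a rough illustration of the Datumaro workflow mentioned above, the sketch below converts a CVAT annotation dump to MS COCO with Datumaro's Python library; the paths are placeholders, and it assumes the datumaro package is installed.

```python
# Hypothetical example: re-export an annotation dump with Datumaro.
import datumaro as dm

# Load a dataset previously exported from CVAT (path is a placeholder).
dataset = dm.Dataset.import_from("exported_task/", "cvat")

# Write the same annotations out in MS COCO format.
dataset.export("converted_coco/", "coco", save_media=True)
```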

Deep learning serverless functions for automatic labeling

CVAT supports automatic labeling, which can speed up the annotation process by up to 10x. Here is a list of the algorithms we support and the platforms they can be run on; a related SDK-based sketch follows the table.

Name Type Framework CPU GPU
Segment Anything interactor PyTorch ✔️ ✔️
Deep Extreme Cut interactor OpenVINO ✔️
Faster RCNN detector OpenVINO ✔️
Mask RCNN detector OpenVINO ✔️
YOLO v3 detector OpenVINO ✔️
YOLO v7 detector ONNX ✔️ ✔️
Object reidentification reid OpenVINO ✔️
Semantic segmentation for ADAS detector OpenVINO ✔️
Text detection v4 detector OpenVINO ✔️
SiamMask tracker PyTorch ✔️ ✔️
TransT tracker PyTorch ✔️ ✔️
f-BRS interactor PyTorch ✔️
HRNet interactor PyTorch ✔️
Inside-Outside Guidance interactor PyTorch ✔️
Faster RCNN detector TensorFlow ✔️ ✔️
Mask RCNN detector TensorFlow ✔️ ✔️
RetinaNet detector PyTorch ✔️ ✔️
Face Detection detector OpenVINO ✔️
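
The models above run server-side. Separately, the Python SDK also exposes auto-annotation helpers for running a detection function of your own against a task. The sketch below is a hedged illustration assuming the cvat_sdk.auto_annotation module; the task ID, credentials, and the trivial whole-image "detector" are placeholders standing in for a real model.

```python
# Hypothetical example: annotate a task with a stub detection function.
import PIL.Image

import cvat_sdk.auto_annotation as cvataa
from cvat_sdk import make_client


class WholeImageStub:
    """Placeholder detector: marks every frame with a single 'object' box."""

    @property
    def spec(self) -> cvataa.DetectionFunctionSpec:
        # Declare the labels this function can produce.
        return cvataa.DetectionFunctionSpec(labels=[cvataa.label_spec("object", 0)])

    def detect(self, context, image: PIL.Image.Image):
        # A real function would run a model here; this one just returns
        # one rectangle covering the whole frame.
        return [cvataa.rectangle(0, [0, 0, image.width, image.height])]


with make_client(host="localhost", credentials=("user", "password")) as client:
    # Task ID 42 is a placeholder; clear_existing replaces old annotations.
    cvataa.annotate_task(client, 42, WholeImageStub(), clear_existing=True)
```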

License

The code is released under the MIT License.

This software uses LGPL-licensed libraries from the FFmpeg project. The exact steps on how FFmpeg was configured and compiled can be found in the Dockerfile.

FFmpeg is an open-source framework licensed under LGPL and GPL. See https://www.ffmpeg.org/legal.html. You are solely responsible for determining if your use of FFmpeg requires any additional licenses. CVAT.ai Corporation is not responsible for obtaining any such licenses, nor liable for any licensing fees due in connection with your use of FFmpeg.

Contact us

Gitter to ask CVAT usage-related questions. Questions typically get answered quickly by the core team or the community, and you can also browse other common questions there.

Discord is another place to ask questions or discuss anything related to CVAT.

LinkedIn for the company and work-related questions.

YouTube to watch screencasts and tutorials about CVAT.

GitHub issues for feature requests or bug reports. If it's a bug, please add the steps to reproduce it.

The #cvat tag on StackOverflow is one more way to ask questions and get our support.

[email protected] to reach out to us if you need commercial support.
