
Is there a plan to open source the trained weights of MaskRCNN? #8

Closed
hpanwar08 opened this issue Oct 21, 2019 · 13 comments

@hpanwar08

Thank you for this large dataset. Is there a plan to open-source the trained weights and code in Keras or PyTorch?

@zhxgj
Contributor

zhxgj commented Oct 21, 2019

Hi @hpanwar08, thanks for your interest. Yes, we do. We are going to migrate the data to another platform, which also supports sharing models. We are under internal legal assessment now. Hopefully we can get approved soon. Please stay tuned :)

@Ramlinbird

> Hi @hpanwar08 Thanks for your interests. Yes we do. We are going to migrate the data to another platform, which also supports sharing models. We are under internal legal assess now. Hopefully we can get approved soon. Please stay tuned :)

Looking forward to a faster way to get the data, and to the shared models :)

@Ramlinbird

Ramlinbird commented Nov 9, 2019

@zhxgj Hi, is the model ready to publish? How much time did you spend training this model (and with what settings)? Are there any tricks to accelerate training? I am now trying to train the model, but I find it may take many days with only 4 GPUs. Thanks.

@hpanwar08
Author

> @zhxgj HI,is the model ready to publish? How much time did you spend on training this model (what's the setting)? Is there any accelerating skills? Now I try to train the model, but find it may take me many days with only 4GPUs. Thanks.

I am training matterport Mask R-CNN on half of the training data on 1 GPU; it has been running for days. One epoch on half of the training data takes 42 hrs.
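For rough scale, a back-of-the-envelope estimate from the figures quoted in this thread (these numbers are assumptions taken from the comments, not measurements): ~143k images as half of the training set, 42 hours per epoch on one GPU.

```python
# Back-of-the-envelope training-time estimate from the numbers quoted
# in this thread: ~143,000 images is half of the training set, and one
# epoch over them takes ~42 hours on a single GPU.
images_per_epoch = 143_000
epoch_hours = 42

throughput = images_per_epoch / (epoch_hours * 3600)  # images per second
full_epoch_hours = epoch_hours * 2                    # full ~286k-image set

print(f"{throughput:.2f} img/s, {full_epoch_hours} h per full epoch")
# → 0.95 img/s, 84 h per full epoch
```

At roughly one image per second on a single GPU, multi-day training times for the full dataset are expected.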

@zhxgj
Contributor

zhxgj commented Nov 10, 2019

> @zhxgj HI,is the model ready to publish? How much time did you spend on training this model (what's the setting)? Is there any accelerating skills? Now I try to train the model, but find it may take me many days with only 4GPUs. Thanks.

It took me about a week to train the models on 8 K80 GPUs. The models are still under legal review, and unfortunately I do not have a timeline right now, but I think it should be soon. I will follow up with our legal team to accelerate things.

@monuminu

monuminu commented Dec 1, 2019

Waiting for the pretrained model weights. Hope we will get them soon :)

@zhxgj
Contributor

zhxgj commented Dec 1, 2019

> Waiting for the pretrained model weights . Hope we will get it soon :)

Thanks @monuminu, I will keep following up with our legal team.

@hpanwar08
Author

I have trained Mask R-CNN using Detectron2 on half of the training data (143k images) on a single GPU, with a ResNet-101 backbone initialized with COCO weights. My results are close to the ones reported in the paper.
However, the model struggles with small objects.
@zhxgj do you have the metrics for small, medium, and large objects? I am not able to find them in the paper.

[image: evaluation results]

@zhxgj
Contributor

zhxgj commented Dec 3, 2019

> I have trained Mask RCNN using detectron2 on half of the training data (143k images) on a single GPU with resnet101 backbone initialized with coco weights. My results are close to the ones mentioned in the paper.
> However model struggles with small objects.
> @zhxgj do you have the metrics for small, medium, large object. I am not able to find them in the paper.

Hi @hpanwar08, I got similar results on small objects. Below are my more detailed validation results.

INFO json_dataset_evaluator.py: 241: ~~~~ Mean and per-category AP @ IoU=[0.50,0.95] ~~~~
INFO json_dataset_evaluator.py: 242: 91.0
INFO json_dataset_evaluator.py: 250: 91.6
INFO json_dataset_evaluator.py: 250: 84.0
INFO json_dataset_evaluator.py: 250: 88.6
INFO json_dataset_evaluator.py: 250: 96.0
INFO json_dataset_evaluator.py: 250: 94.9
INFO json_dataset_evaluator.py: 251: ~~~~ Summary metrics ~~~~
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.910
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.964
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.944
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.344
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.775
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.944
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.523
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.916
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.927
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.399
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.804
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.956
INFO json_dataset_evaluator.py: 218: Wrote json eval results to: /dccstor/ddig/peter/logs/e2e_mask_rcnn_X-101-64x4d-FPN_1x/model_iter179999/test/medline_val/generalized_rcnn/detection_results.pkl
INFO task_evaluation.py:  62: Evaluating bounding boxes is done!
INFO task_evaluation.py: 105: Evaluating segmentations
INFO json_dataset_evaluator.py:  88: Writing segmentation results json to: /dccstor/ddig/peter/logs/e2e_mask_rcnn_X-101-64x4d-FPN_1x/model_iter179999/test/medline_val/generalized_rcnn/segmentations_medline_val_results.json
Loading and preparing results...
DONE (t=3.53s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=51.81s).
Accumulating evaluation results...
DONE (t=4.55s).
INFO json_dataset_evaluator.py: 241: ~~~~ Mean and per-category AP @ IoU=[0.50,0.95] ~~~~
INFO json_dataset_evaluator.py: 242: 86.7
INFO json_dataset_evaluator.py: 250: 88.6
INFO json_dataset_evaluator.py: 250: 73.6
INFO json_dataset_evaluator.py: 250: 80.3
INFO json_dataset_evaluator.py: 250: 95.9
INFO json_dataset_evaluator.py: 250: 94.8
INFO json_dataset_evaluator.py: 251: ~~~~ Summary metrics ~~~~
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.867
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.964
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.934
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.297
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.702
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.915
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.508
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.885
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.895
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.351
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.753
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.935
INFO json_dataset_evaluator.py: 137: Wrote json eval results to: /dccstor/ddig/peter/logs/e2e_mask_rcnn_X-101-64x4d-FPN_1x/model_iter179999/test/medline_val/generalized_rcnn/segmentation_results.pkl
INFO task_evaluation.py:  66: Evaluating segmentations is done!
INFO task_evaluation.py: 181: copypaste: Dataset: medline_val
INFO task_evaluation.py: 183: copypaste: Task: box
INFO task_evaluation.py: 186: copypaste: AP,AP50,AP75,APs,APm,APl
INFO task_evaluation.py: 187: copypaste: 0.9100,0.9635,0.9444,0.3436,0.7750,0.9441
INFO task_evaluation.py: 183: copypaste: Task: mask
INFO task_evaluation.py: 186: copypaste: AP,AP50,AP75,APs,APm,APl
INFO task_evaluation.py: 187: copypaste: 0.8667,0.9636,0.9339,0.2971,0.7024,0.9150
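For context on the small/medium/large split in the results above: COCO-style evaluation buckets objects by pixel area, with small < 32², medium in [32², 96²), and large ≥ 96². A minimal helper illustrating those thresholds (not part of the evaluator itself):

```python
def coco_size_bucket(area: float) -> str:
    """Return the COCO evaluation size bucket for an object of the
    given pixel area: small < 32^2, medium < 96^2, large otherwise."""
    if area < 32 ** 2:       # < 1,024 px
        return "small"
    if area < 96 ** 2:       # < 9,216 px
        return "medium"
    return "large"

# A 30x30-pixel region counts as "small" -- the bucket where both sets
# of results above lose most of their AP.
print(coco_size_bucket(30 * 30))   # → small
```

For page images at typical rendering resolutions, small text elements such as inline labels fall into the "small" bucket, which is consistent with the low APs/ARs figures reported in this thread.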

@zhxgj
Contributor

zhxgj commented Dec 3, 2019

The pre-trained Faster R-CNN and Mask R-CNN models are released.

@zhxgj zhxgj closed this as completed Dec 3, 2019
@hpanwar08
Copy link
Author

Thanks @zhxgj

@lorenzospataro

> Pre-trained Faster-RCNN model and Mask-RCNN model are released.

Is there any plan to release weights for Detectron2?
Or at least a suggestion on how to use these weights with Detectron2?

Thank you

@zhxgj
Contributor

zhxgj commented Dec 3, 2019

> > Pre-trained Faster-RCNN model and Mask-RCNN model are released.
>
> Any plan to release the weights for detectron2?
> Or at least a suggestion about how to use detectron2 with these weights?
>
> Thank you

We plan to re-train the models on PubLayNet using Detectron2. The NVIDIA driver on our GPU cluster is outdated and does not support Detectron2. We do not have permission to upgrade the driver, but we are actively seeking a solution at the moment.

To use a Detectron model in Detectron2, I saw someone posted a solution here. I have not tested it, but you can give it a try.
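As a hedged sketch of the first step of such a conversion: Detectron-era checkpoints are Python 2 pickles that conventionally keep their weights under a top-level `blobs` key, so they can at least be opened and inspected from Python 3 with `latin1` decoding. This is an assumption about the released files, not a tested recipe; mapping the caffe2 blob names onto Detectron2's state dict is the part the solution mentioned above handles.

```python
import pickle

def load_detectron_blobs(path):
    """Open a Detectron (caffe2-era) .pkl checkpoint and return its
    weight dict. These files are Python-2 pickles, so encoding='latin1'
    is needed under Python 3; the weights conventionally sit under a
    top-level 'blobs' key (assumption -- verify against the released
    PubLayNet files)."""
    with open(path, "rb") as f:
        data = pickle.load(f, encoding="latin1")
    return data.get("blobs", data)

# Listing the returned blob names is the starting point for renaming
# them to match a Detectron2 state dict.
```

Once the blob names are known, the remaining work is purely a key-renaming exercise between the two frameworks' parameter naming schemes.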
