Load Custom Model #39
I followed the instructions and loaded my model successfully. It works very well, thanks!
But we can go further: we could design an interface that guides users to load their own models and then automatically generates the corresponding YAML files.
In addition, I have an idea about integrating the model training process. Usually we train the model on a remote server. Take YOLOv5 for example: its training process is relatively fixed. We could package the labeled datasets and upload them to the server via FTP, then use SSH to connect to the remote server and execute a highly templated training command (usually only specifying a dataset, epochs, and training image size) to train the model.
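The "highly templated training command" idea above could be sketched like this. This is a minimal illustration, not part of AnyLabeling; the flag names follow YOLOv5's `train.py`, and the SSH invocation in the comment is a hypothetical example:

```python
def build_train_cmd(data_yaml: str, epochs: int, imgsz: int) -> list:
    """Assemble a templated YOLOv5 training command from the three
    parameters the comment above says usually vary between runs."""
    return ["python", "train.py",
            "--data", data_yaml,
            "--epochs", str(epochs),
            "--imgsz", str(imgsz)]

# On a remote server this could be executed over SSH, e.g. (hypothetical host):
# subprocess.run(["ssh", "user@server",
#                 "cd yolov5 && " + " ".join(build_train_cmd("custom.yaml", 100, 640))])
```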
Ah, I see.
(Nanchang University)
------------------ Original message ------------------
From: "vietanhdev/anylabeling"
Date: Friday, April 21, 2023, 11:02 PM
Subject: Re: [vietanhdev/anylabeling] Load Custom Model (Issue #39)
This is a feature that I'll try to add as soon as I can; however, it is already possible with some small changes in the code. Follow these steps:
1) Export the model using the YOLOv5 or YOLOv8 repository.

For YOLOv5:

```shell
python export.py --weights yourmodel.pt --include onnx --opset 12
```

For YOLOv8:

```shell
yolo export model=yourmodel.pt format=onnx opset=12
```

It is important here to set `opset=12`.
2) Put yourmodel.onnx in the correct path.
Here is an example path on Ubuntu: go to /home/youruser/anylabeling_data/, create a folder yourmodel_name, and copy yourmodel.onnx there.
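Step 2 can be scripted. This is a sketch only; the folder layout follows the Ubuntu example above, and the data directory is passed in as a parameter (in practice it would be /home/youruser/anylabeling_data):

```python
import shutil
from pathlib import Path

def install_model(onnx_file: str, data_dir: str, model_name: str) -> Path:
    """Copy an exported .onnx model into a per-model folder under the
    AnyLabeling data directory, creating the folder if needed."""
    target_dir = Path(data_dir) / model_name
    target_dir.mkdir(parents=True, exist_ok=True)
    # shutil.copy keeps the original file name when the target is a directory
    return Path(shutil.copy(onnx_file, target_dir))
```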
3) Create a yourmodel.yaml file for your model.
Go to anylabeling/anylabeling/configs/auto_labeling and adapt the default YOLOv5 or YOLOv8 .yaml to your case. Here is an example for a YOLOv5 model:
```yaml
type: yolov5
name: yourmodel_name
display_name: yourmodel_name
model_path: https://github.com/vietanhdev/anylabeling-assets/releases/download/v0.0.1/yolov5l.onnx
input_width: 640
input_height: 640
score_threshold: 0.5
nms_threshold: 0.45
confidence_threshold: 0.45
classes:
  - your_class1
  - your_class2
  - ...
```
The model_path value in your .yaml file doesn't matter, because you already copied the model into the download folder (anylabeling_data).
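As a sanity check before launching the app, the required top-level keys of such a config can be verified. This is a minimal stdlib sketch (not part of AnyLabeling); the key list is taken from the example config above, and the parser assumes the flat `key: value` layout shown there:

```python
REQUIRED_KEYS = {"type", "name", "display_name", "model_path",
                 "input_width", "input_height", "score_threshold",
                 "nms_threshold", "confidence_threshold", "classes"}

def missing_keys(yaml_text: str) -> set:
    """Return the required top-level keys absent from a flat model .yaml.
    List items (lines starting with '-') are skipped."""
    present = {line.split(":", 1)[0].strip()
               for line in yaml_text.splitlines()
               if ":" in line and not line.lstrip().startswith("-")}
    return REQUIRED_KEYS - present
```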
4) Change the models.yaml file so yourmodel_name is listed.
Go to anylabeling/anylabeling/configs/auto_labeling, open the models.yaml file, and add your model at the end of the file:
```yaml
...
- model_name: "yourmodel_name"
  config_file: "yourmodel.yaml"
```
It is important that you keep the values for yourmodel_name and yourmodel.yaml consistent across both files.
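The consistency requirement in step 4 can be checked mechanically. A sketch under the same assumptions as above (flat `key: value` config layout; not part of AnyLabeling itself):

```python
def entry_consistent(model_name: str, config_file: str, config_text: str) -> bool:
    """Check that a models.yaml entry references a config whose `name`
    field matches the entry's model_name, and that config_file is a .yaml."""
    names = [line.split(":", 1)[1].strip()
             for line in config_text.splitlines()
             if line.split(":", 1)[0].strip() == "name"]
    return bool(names) and names[0] == model_name and config_file.endswith(".yaml")
```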
Let me know if you need some extra help.
As discussed with @vietanhdev, we are trying to load the original SAM ViT-H model (not the quantized version). I tried the easiest way of just replacing encoder.onnx and decoder.onnx in anylabeling_data/. However, the SAM ViT-H model also has an additional 2.5 GB encoder weights file, encoder-data.bin, so loading fails and the app still tries to download the encoder and decoder files of the quantized SAM ViT-H version.
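ONNX models exported with external data keep their large weights in a side file (here encoder-data.bin), which must sit next to the .onnx for loading to succeed. A quick pre-flight check could look like this; it is a sketch only, with the file names taken from the comment above:

```python
from pathlib import Path

def missing_sam_files(model_dir) -> list:
    """Report which of the expected SAM ViT-H files are absent from
    model_dir, including the external-data side file for the encoder."""
    expected = ["encoder.onnx", "decoder.onnx", "encoder-data.bin"]
    d = Path(model_dir)
    return [name for name in expected if not (d / name).is_file()]
```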
From AnyLabeling v0.2.22, to load custom models:
@vietanhdev Can I ask a question? I can successfully load my models, but I want to add a Group ID. How should I do that?
Did you successfully load the model in the end? I have downloaded the latest version of the annotation tool, but I am not sure how to integrate the open-source SAM vit_h. Can you help me convert the .pth model into an ONNX model the tool can load?
@KroitAax Check this code for converting and loading the model separately: https://github.com/vietanhdev/samexporter
Hi, I ran into a problem. I did not follow the steps answered in this issue to import my model, because this repo has been updated since. But after the model was imported, the semi-automatic labeling results were very poor. Can you give me some advice? Thank you.
Thanks for your great tool AnyLabeling. RectLabel is an offline image annotation tool for object detection and segmentation.
Sorry, but I think you should avoid advertising on this platform. Please, let's keep this repo free of spam.
We are sorry that we made you uncomfortable; we did not intend to spam. As developers of image annotation tools, we can help with loading YOLOv5/v8/SAM models and processing images. Our main purpose is not to advertise our product.
Hello @hdnh2006, I have converted the custom model to ONNX and added it as described, but I get the error below in the terminal:
Thanks for your help.
I have a YOLOv5 model trained on a custom dataset. I want to load it as TorchScript and label the rest of the dataset, but it seems that only standard YOLOv5/v8/SAM models can be loaded right now.