Specific format of annotation #60
Each object is one line in the .txt file: <object-class> <x_center> <y_center> <width> <height>
Where: <object-class> is an integer from 0 to (classes-1), and the other four are float values relative to the width and height of the image, from (0.0 to 1.0].
For example, see:
https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
Thank you @AlexeyAB. This is very helpful.
Hi @AlexeyAB, I generated annotations by a script, but when I try to test my model I cannot see a bounding box on my test data. Is it because I didn't use the tool to generate bounding boxes for my training data?
I only have one class and I trained for 4000 iterations. I used the command line to test.
I only got "E:\darknet\build\darknet\x64\data\obj\11.jpg: Predicted in 0.060051 seconds." without any bounding boxes or coordinates of objects. Thank you.
Alright, I think the problem is that the model is not predicting. That's weird. I trained for 4000 iterations and got 0.7 loss. I'm not getting results even when I test on training data.
Thank you for your reply. I think I found the bug: I labelled my object wrong somehow, which led to useless training.
But does Yolo_mark resize the image before it does the mark? Edited: my bad, it doesn't. I reversed x and y and it works! Thanks a lot.
Hello, I have annotations with three values (example: 156 111 111) and I don't understand how to convert an annotation with three values to the YOLO format.
@koutini I have the same problem
@koutini please help me to resolve it
@sarratouil thank you for the support
@AlexeyAB @ycui123 @ido-ran @RRMoelker how can I convert this format of annotations to the YOLO format?
@koutini @sarratouil Hi,
@AlexeyAB I use this dataset to train to detect the iris
@AlexeyAB please help me to resolve it
@AlexeyAB thank you so much for your answer.
@AlexeyAB maybe the 1st and 2nd values are the coordinates of the centre of a square
@sarratouil I told you
@AlexeyAB or maybe the 1st and 2nd values are the coordinates of the first point where the annotator clicked in the image, and the 3rd is the width and height
I just don't see the link to IRIS images.
So you should write your own code in Bash/Python/C... for the conversion. Just as an example, you can look at this script showing how to read CSV files in bash: https://github.com/AlexeyAB/darknet/blob/master/scripts/windows/otb_get_labels.sh
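As a sketch of such a conversion script: assuming the three values in this iris dataset are pixel center-x, center-y, and radius (one of the guesses above; verify against your data, the image size below is also only an example), a minimal Python converter could look like:

```python
# Hypothetical converter: assumes each annotation line holds three pixel
# values "cx cy r" (circle center and radius). Verify this assumption
# against your dataset before relying on it.

def iris_to_yolo(line, img_w, img_h, class_id=0):
    """Convert 'cx cy r' (pixels) to a YOLO '<class> <x> <y> <w> <h>' line."""
    cx, cy, r = (float(v) for v in line.split())
    x = cx / img_w          # normalized center x
    y = cy / img_h          # normalized center y
    w = (2 * r) / img_w     # width of the circle's bounding square
    h = (2 * r) / img_h     # height of the circle's bounding square
    return f"{class_id} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"

# image size 640x480 is only an example here
print(iris_to_yolo("156 111 111", img_w=640, img_h=480))
```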
@AlexeyAB
@AlexeyAB do you have any idea how I can convert my yolo-tiny-obj model that I trained to TensorFlow, to use it on Android platforms?
Also you can look at these repos: |
Dear @AlexeyAB, hello,
@koutini Hi, What do you mean? |
@wasp-codes |
Hello, if you are interested in this area I can send you a sample annotation file (.txt) and an image to show how to read the images and draw the bounding box for the object in the img.
Thanks in advance, majeedi
On Tuesday, 19 May 2020 at 08:35:39 PM GMT+8, Cindy Weng <[email protected]> wrote:
Hey, might be a silly question, but where do I put all the bounding-box .txt files? We mentioned the location of the images, but we did not mention the location of the annotations to darknet, did we? Thanks
In the same folder as the .jpgs.
Hello, the below I got from the LabelImg tool in YOLO format. Please help me find the relation between the detected box values and what YOLO uses for training.
Hello, thank you for the instructions. May I confirm? I am putting my custom data into darknet/build/darknet/x64/data/
Are you able to get the formula? I am also having the same issue.
{'class_id': 0, 'width': 20, 'top': 387, 'height': 74, 'left': 789},
{'class_id': 1, 'width': 25, 'top': 348, 'height': 31, 'left': 805},
{'class_id': 2, 'width': 19, 'top': 447, 'height': 26, 'left': 826},
{'class_id': 4, 'width': 47, 'top': 545, 'height': 33, 'left': 727},
{'class_id': 3, 'width': 32, 'top': 364, 'height': 144, 'left': 896},
{'class_id': 5, 'width': 89, 'top': 246, 'height': 97, 'left': 825},
{'class_id': 7, 'width': 254, 'top': 224, 'height': 388, 'left': 725}
'image_size': [{'width': 1040, 'depth': 3, 'height': 780}]}
@sarratouil Could you please share your code?
I am also facing the same issue.
@mk-hasan Yolo takes in this format:
<object-class> <x_center> <y_center> <width> <height>
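A minimal Python sketch of that conversion, using the field names from the LabelImg-style example posted earlier in this thread (adjust the keys if your tool emits different ones):

```python
# Sketch: convert pixel-space boxes ({'left','top','width','height'}, with
# 'class_id' and an image size) into normalized YOLO annotation lines.
# Field names follow the example posted above; adjust for your own tool.

def box_to_yolo(box, img_w, img_h):
    x_center = (box['left'] + box['width'] / 2) / img_w   # center, not corner
    y_center = (box['top'] + box['height'] / 2) / img_h
    w = box['width'] / img_w
    h = box['height'] / img_h
    return f"{box['class_id']} {x_center:.6f} {y_center:.6f} {w:.6f} {h:.6f}"

# image_size from the example above: 1040 x 780
boxes = [
    {'class_id': 0, 'width': 20, 'top': 387, 'height': 74, 'left': 789},
    {'class_id': 1, 'width': 25, 'top': 348, 'height': 31, 'left': 805},
]
for b in boxes:
    print(box_to_yolo(b, img_w=1040, img_h=780))
```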
Hi! |
.txt-file for each .jpg-image-file - in the same directory and with the same name, but with .txt-extension - and put to file: object number and object coordinates on this image, one line per object:
<object-class> <x_center> <y_center> <width> <height>
Where: <object-class> is an integer number of the object from 0 to (classes-1), and <x_center> <y_center> <width> <height> are float values relative to the width and height of the image, from (0.0 to 1.0]. Attention: <x_center> <y_center> are the center of the rectangle (not the top-left corner).
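Going the other way, e.g. to draw a labelled box and visually verify an annotation, the same formula can be inverted. A minimal sketch:

```python
# Inverse of the annotation formula: recover pixel corners from a YOLO
# annotation line, e.g. to draw the box and visually check your labels.

def yolo_to_pixels(line, img_w, img_h):
    """Return (class_id, left, top, right, bottom) in pixels."""
    cls, x, y, w, h = line.split()
    x, y, w, h = float(x), float(y), float(w), float(h)
    left   = int((x - w / 2) * img_w)   # x,y are the box CENTER
    top    = int((y - h / 2) * img_h)
    right  = int((x + w / 2) * img_w)
    bottom = int((y + h / 2) * img_h)
    return int(cls), left, top, right, bottom

print(yolo_to_pixels("0 0.5 0.5 0.25 0.5", 1040, 780))
```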
This old thread seems to come up a lot. I've added an entry to the FAQ with an example showing exactly how the numbers all fit together. See this: https://www.ccoderun.ca/programming/darknet_faq/#darknet_annotations |
Is it a problem if somehow some values, after converting to YOLO format, look like this: 0 0.67890625 1.287037037037037 0.05364583333333333 0.2555555555555556 The y value is above 1. Is that alright?
If you're asking, I'm guessing you already know it is a problem. The values are normalized 0...1 so it is impossible to get a value > 1. And since that is the middle coordinate and not an edge of the rectangle, it should be impossible to get exactly 1.0 as well. I'm not certain what that would do to Darknet during training, but it cannot be good. I wouldn't be surprised if it causes Darknet to crash as it attempts to create a RoI from the image outside of the image boundary. |
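A quick way to catch such out-of-range values before training is to scan all the label files. A minimal sketch, assuming one .txt per image in a single directory:

```python
# Sanity check: scan YOLO label files and flag any normalized value
# outside (0, 1], which indicates a conversion bug in the annotations.
import glob

def check_labels(label_dir):
    """Return a list of (path, line_number, line) for every bad line."""
    bad = []
    for path in glob.glob(f"{label_dir}/*.txt"):
        with open(path) as f:
            for n, line in enumerate(f, start=1):
                parts = line.split()
                if not parts:
                    continue  # skip blank lines
                values = [float(v) for v in parts[1:5]]  # x, y, w, h
                if any(v <= 0.0 or v > 1.0 for v in values):
                    bad.append((path, n, line.strip()))
    return bad
```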
You have to follow a formula:
x_center = (left + width/2) / image_width
y_center = (top + height/2) / image_height
w = width / image_width
h = height / image_height
The format is described here in details: https://www.ccoderun.ca/programming/darknet_faq/#darknet_annotations |
Yes, I have tried this and it is working fine. x,y,w,h = detected_coordinates # the coordinates of a bounding box of an object in the image
Does it mean that x_mid = (left_x + width + left_x)/2 and y_mid = (top_y + height + top_y)/2?
@Riankk123 |
Could you please tell me the format of annotation? I generated my own dataset and I want to train it. I know exactly where my objects are in the image. In that case, I don't want to manually generate annotation. I can write some code to do it for me if I know the format of annotation for YOLO v3. Thank you
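Since the object positions are already known in this case, a small script can emit the annotation files directly. A minimal sketch (hypothetical helper, assuming pixel-space boxes given as (class_id, left, top, width, height); the file layout follows the rule above: one .txt per image, same base name as the .jpg):

```python
# Sketch: emit a YOLO .txt annotation file for an image whose object
# boxes are already known in pixel coordinates. The .txt gets the same
# base name as the image and sits in the same directory.
import os

def write_annotation(image_path, objects, img_w, img_h):
    """objects: list of (class_id, left, top, width, height) in pixels."""
    txt_path = os.path.splitext(image_path)[0] + ".txt"
    with open(txt_path, "w") as f:
        for cls, left, top, w, h in objects:
            f.write(f"{cls} "
                    f"{(left + w / 2) / img_w:.6f} "   # x_center, normalized
                    f"{(top + h / 2) / img_h:.6f} "    # y_center, normalized
                    f"{w / img_w:.6f} {h / img_h:.6f}\n")
    return txt_path
```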