@Jason1-2 Yes, the "results" variable in the snippet results = model(image) contains the output of the YOLOv8 model's prediction on the input image. The output is a list of result objects holding information about the detected objects, such as their class labels, confidence scores, and bounding box coordinates. You can use this output to process the detections however suits your needs. For example, you could filter out low-confidence detections, draw bounding boxes around the detected objects, and display the class labels next to the boxes.
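A minimal sketch of that post-processing step, assuming the class labels, confidences, and boxes have already been read out of "results" (in practice they come from the result objects; the filter_detections helper and the sample values below are hypothetical, for illustration only):

```python
# Hypothetical helper: keep only detections at or above a confidence threshold.
# Each detection is a (label, confidence, (x1, y1, x2, y2)) tuple, i.e. the kind
# of values you would read out of a YOLOv8 prediction result.

def filter_detections(detections, conf_threshold=0.5):
    """Return only the detections whose confidence meets the threshold."""
    return [d for d in detections if d[1] >= conf_threshold]

# Simulated output for one frame (made-up numbers, not real model output):
detections = [
    ("Bottle", 0.91, (34, 50, 120, 300)),
    ("Cup",    0.32, (200, 80, 260, 180)),  # low confidence, will be dropped
    ("Can",    0.77, (300, 60, 350, 190)),
]

for label, conf, box in filter_detections(detections):
    print(f"{label}: {conf:.2f} at {box}")
```

From here, drawing the boxes and labels onto the frame is just a matter of passing those coordinates to whatever drawing library you are using.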
-
I am very new to Python and Ultralytics, but so far I have trained a model to detect bottles, cups, and cans for segmentation using my Raspberry Pi camera. I'm using YOLOv8 and Thonny for Python; so far my code has only three lines, and it's able to identify objects perfectly.
Here are my lines of code:
```python
from ultralytics import YOLO

model = YOLO('best.pt')
model.predict(source=0, show=True)
```
My camera can successfully detect objects on a live feed, and I can see frames being printed at the bottom of the console, for example:
0: 480x640 1 Bottle, 2330.8ms
Here "0:" is the camera index, "480x640" is the frame size, "1 Bottle" is what was identified, and "2330.8ms" is the time it took to process that frame.
I got help and was told that I can output results and handle them however I want. An example I was given was:
results = model(image)
In conclusion, my question is whether "results" would be considered the feedback of what the camera is detecting, i.e. "Bottle", "Can", or "Cup", or whether that would be "image" in that code.
If anyone has examples or videos for me to follow, I would appreciate it if you shared them with me.
Thank you!!!