
Different versions of TensorRT get different model inference results #4198

Closed · demuxin opened this issue Oct 14, 2024 · 1 comment


demuxin commented Oct 14, 2024

Description

I run inference on the groundingDINO model using C++ TensorRT.

For the same model and the same image, TensorRT 8.6 produces the correct detection boxes.

But after updating TensorRT to 10.4, I get no detection boxes.

The wrong results may be caused by TensorRT 10.4. How can I analyze this issue?

By the way, I've tried multiple versions other than 8.6 (e.g., 9.3, 10.0, 10.1); none of them produce detection boxes.
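For reference, one common way to localize this kind of regression is to mark an intermediate tensor as an extra network output, build engines with both TensorRT versions, and diff the dumped values (Polygraphy automates the same idea). A minimal C++ sketch follows; the model path and the probed layer index are placeholders, not taken from the actual setup:

```cpp
// Sketch: expose an intermediate tensor as an engine output so its values
// can be compared between TensorRT 8.6 and 10.4 builds of the same ONNX model.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>
#include <memory>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << "\n";
    }
};

int main() {
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);  // TRT 8.x; on TRT 10 pass 0 (always explicit batch)
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));

    if (!parser->parseFromFile("groundingdino.onnx",  // placeholder path
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse ONNX model\n";
        return 1;
    }

    // Placeholder probe point: mark the first output of a layer roughly
    // halfway through the network as an extra engine output.
    nvinfer1::ILayer* probe = network->getLayer(network->getNbLayers() / 2);
    network->markOutput(*probe->getOutput(0));

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    auto plan = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    return plan ? 0 : 1;
}
```

If the marked tensor already diverges between the two versions, the problem is upstream of it; if it matches, the problem is downstream, so a handful of such builds can bisect to the first differing layer.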

Environment

TensorRT Version: 8.6.1.6 / 10.4.0.26

NVIDIA GPU: GeForce RTX 3090

NVIDIA Driver Version: 535.183.06

CUDA Version: 12.2

Relevant Files

Model link: https://drive.google.com/file/d/1VRHKT7cswtDVXNUUmebbPmBSAOyd-fJN/view?usp=drive_link


demuxin commented Oct 16, 2024

I loaded the same ONNX model via C++ TensorRT and printed the information for each layer (a sketch of this dump is shown after the attachments).

TensorRT 8.6 loads the model with 21060 layers, while TensorRT 10.4 loads it with 37921 layers. Why is the difference in the number of layers so large?

rt86_layers.txt
rt104_layers.txt
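The exact dump code is not shown in the issue; below is a minimal sketch of how such a per-layer listing can be produced, assuming the counts above come from the parsed INetworkDefinition (the model path is a placeholder):

```cpp
// Sketch: write one line per layer of the parsed network, producing files
// like rt86_layers.txt / rt104_layers.txt for a side-by-side diff.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <memory>

class Logger : public nvinfer1::ILogger {
    void log(Severity, const char*) noexcept override {}  // keep the dump quiet
};

int main() {
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);  // pass 0 on TRT 10
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("groundingdino.onnx",  // placeholder path
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return 1;

    std::ofstream out("layers.txt");
    for (int i = 0; i < network->getNbLayers(); ++i) {
        const nvinfer1::ILayer* layer = network->getLayer(i);
        out << i << '\t' << static_cast<int>(layer->getType())
            << '\t' << layer->getName() << '\n';
    }
    out << "total: " << network->getNbLayers() << " layers\n";
    return 0;
}
```

Diffing the two listings line by line shows which ONNX ops the 10.4 parser imports with a different decomposition than 8.6, which is one plausible source of such a large count difference.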

demuxin closed this as completed Oct 18, 2024