
Preprocessing frames and inference on L40 GPU (runpod.io) is taking too much time #336

Open
tariksetia opened this issue Jun 12, 2024 · 0 comments


Consider the following notebook: https://github.com/tariksetia/cv-benchmark/blob/main/03-grounding-dino-gh.ipynb

If I run this on a MacBook or a V100 (SageMaker), preprocessing the 283 images takes almost no time. But on an L40 GPU (RunPod) it takes 7 minutes, and I don't know why.

The same difference shows up in model inference: on the V100, while running inference over the 283 images with a batch size of one, GPU utilisation is quite high, reaching up to 100%. That yields about 5 FPS across the 283 images.

But on the L40, GPU utilisation never went above 30%, and most of the time it stayed at 1%.

The same notebook was run in every environment, and the same behaviour is observed with Grounding DINO on Hugging Face.

Has anyone observed this before? Am I doing something wrong here?
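Not from the original notebook, but a minimal sketch of how the two stages could be timed separately to confirm where the slowdown is (the `preprocess` and `infer` functions here are hypothetical stand-ins for the notebook's actual calls; real GPU timing would also need `torch.cuda.synchronize()` before reading the clock):

```python
import time

def time_stage(fn, items):
    """Run fn over items one at a time; return (total_seconds, items_per_second)."""
    start = time.perf_counter()
    for item in items:
        fn(item)
    total = time.perf_counter() - start
    fps = len(items) / total if total > 0 else float("inf")
    return total, fps

# Hypothetical stand-ins for the notebook's real preprocessing and inference calls.
def preprocess(image):
    return image  # e.g. resize + normalise in the real notebook

def infer(image):
    return image  # e.g. model(image) with batch size 1 in the real notebook

images = list(range(283))  # placeholder for the 283 images
pre_total, pre_fps = time_stage(preprocess, images)
inf_total, inf_fps = time_stage(infer, images)
print(f"preprocess: {pre_total:.2f}s ({pre_fps:.1f} img/s)")
print(f"inference:  {inf_total:.2f}s ({inf_fps:.1f} img/s)")
```

If preprocessing alone accounts for the 7 minutes, the bottleneck is on the CPU side (the 1% GPU utilisation on the L40 would be consistent with the GPU waiting on CPU-bound preprocessing or data transfer rather than being slow itself).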
