Inference on CPU #222
Comments
I'm running a fork of this repository that has CPU compatibility for just the inference script. Unfortunately, I have already modified the entire repo beyond the point of it likely being useful, because I'm using a non-COCO dataset (it's https://github.com/NuTufts/Detectron.pytorch/tree/cpu_train, but I don't recommend trying to use it). The basic change I had to make was replacing all of the .cuda() calls on tensors (which push them to GPU devices) with tensor.to(torch.device(device_id)), which lets you set the device to CPU. I also had to adjust data_parallel so that it can handle CPU devices. In the above repo I created a datasingular module that contains those changes.
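A minimal sketch of the device-agnostic pattern described above (the tensor and module names are illustrative, not taken from the repo):

```python
import torch

# Pick the device once: use the GPU if one is available, otherwise fall back to CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Instead of forcing tensors onto the GPU with .cuda(), e.g.
#   rois = rois.cuda()
# move them to whichever device was selected:
rois = torch.rand(4, 5)
rois = rois.to(device)

# The same call works for modules, so the model follows the tensors:
model = torch.nn.Linear(5, 2).to(device)
out = model(rois)
```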
Is it possible to run inference on CPU? The forward function of roi_Xconv1fc_gn_head_panet in fast_rcnn_heads.py relies on the GPU version of RoIAlign. How can this issue be solved?
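One possible workaround (not part of this repo, just a sketch): torchvision ships a roi_align op with a CPU kernel, so the CUDA-only RoIAlign call in the head could in principle be replaced with it. The shapes and parameters below are illustrative.

```python
import torch
from torchvision.ops import roi_align

# Feature map: (batch, channels, height, width), kept on the CPU.
features = torch.rand(1, 256, 50, 50)

# Boxes in (batch_index, x1, y1, x2, y2) format, as roi_align expects.
rois = torch.tensor([[0.0, 10.0, 10.0, 40.0, 40.0]])

# torchvision's roi_align has both CPU and CUDA kernels, so this runs without a GPU.
pooled = roi_align(features, rois, output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])
```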