
onnxruntime crash when trying to use TensorRT to run a quantized ONNX model #88

Open
summelon opened this issue May 26, 2023 · 0 comments


Hi, thanks for your great work.

I am trying to improve the performance of anylabeling when a GPU and the TensorRT backend are available.

I followed these steps (a sketch of the resulting session setup is shown after the list):

  1. Download your ViT-B quantized ONNX model
  2. Run symbolic shape inference on the model with symbolic_shape_infer.py, as described in the official documentation
  3. Enable "trt_int8_enable" in the TensorRT execution provider options
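
As a minimal sketch of what I did (the model filenames are placeholders, and I'm assuming the standard onnxruntime Python API), the session is created roughly like this:

```python
import onnxruntime as ort

# Step 2 (run once beforehand): symbolic shape inference, per the ORT docs:
#   python -m onnxruntime.tools.symbolic_shape_infer \
#       --input sam_vit_b_quantized.onnx --output sam_vit_b_inferred.onnx --auto_merge

providers = [
    ("TensorrtExecutionProvider", {
        "trt_int8_enable": True,  # step 3: enable INT8 in the TRT EP options
    }),
    "CUDAExecutionProvider",      # fallback for nodes TRT cannot take
    "CPUExecutionProvider",
]

session = ort.InferenceSession("sam_vit_b_inferred.onnx", providers=providers)
```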

But I hit the following error:

```
2023-05-26 08:27:12.631758183 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1210 GetCapability] [TensorRT EP] No graph will run on TensorRT execution provider
2023-05-26 08:27:13.136179864 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-05-26 08:27:13.136197010 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
(2048, 2048, 3)
2023-05-26 08:27:13.995464756 [E:onnxruntime:Default, cuda_call.cc:119 CudaCall] CUDA failure 1: invalid argument ; GPU=1 ; hostname=vision ; expr=cudaMemcpyAsync(output.MutableDataRaw(), input.DataRaw(), input.Shape().Size() * input.DataType()->Size(), cudaMemcpyDeviceToDevice, stream); 
2023-05-26 08:27:13.995665084 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Einsum node. Name:'/blocks.0/attn/Einsum' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/einsum_utils/einsum_auxiliary_ops.cc:298 std::unique_ptr<onnxruntime::Tensor> onnxruntime::EinsumOp::Transpose(const onnxruntime::Tensor&, const onnxruntime::TensorShape&, const gsl::span<const long unsigned int>&, onnxruntime::AllocatorPtr, void*, const Transpose&) 21Einsum op: Transpose failed: CUDA failure 1: invalid argument ; GPU=1 ; hostname=vision ; expr=cudaMemcpyAsync(output.MutableDataRaw(), input.DataRaw(), input.Shape().Size() * input.DataType()->Size(), cudaMemcpyDeviceToDevice, stream); 

terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
  what():  /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:124 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:117 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 700: an illegal memory access was encountered ; GPU=1 ; hostname=vision ; expr=cudaEventDestroy(event_); 
```

I saw that you manually filter out the TensorRT execution provider (a sketch of what I mean is below).
Have you ever run into a similar issue?
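
For reference, this is roughly what I understood the filtering to look like (a hypothetical sketch, not your actual code):

```python
import onnxruntime as ort

# Drop the TensorRT EP from the provider list before creating the session,
# so inference falls back to CUDA/CPU instead.
available = ort.get_available_providers()
providers = [p for p in available if p != "TensorrtExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)
```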
Thanks in advance.
