ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled

自由情感小屋 · 2022-02-12 · 473 views

While running an inference test with the GPU build of onnxruntime, the following error occurred:

Traceback (most recent call last):
  File "D:\cv\ConNext_demo\testonnx.py", line 57, in <module>
    rnet1 = ONNXModel(onnx_model_path)
  File "D:\cv\ConNext_demo\models\onnx.py", line 7, in __init__
    self.onnx_session = onnxruntime.InferenceSession(onnx_path)
  File "D:\ProgramData\Anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "D:\ProgramData\Anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 361, in _create_inference_session
    raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Solution: since ORT 1.9, the providers parameter must be set explicitly when creating an InferenceSession. Change

onnxruntime.InferenceSession(onnx_path)

to:

onnxruntime.InferenceSession(onnx_path, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
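
A more robust variant is to request only the providers this particular onnxruntime build supports, falling back to CPU. This is a minimal sketch (not from the original post); it uses onnxruntime.get_available_providers(), and the model path is a placeholder:

import onnxruntime

onnx_model_path = "model.onnx"  # placeholder: path to your exported ONNX model

# Keep only the preferred providers that this build actually supports.
preferred = ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
available = onnxruntime.get_available_providers()
providers = [p for p in preferred if p in available] or ['CPUExecutionProvider']

session = onnxruntime.InferenceSession(onnx_model_path, providers=providers)

This way the same code runs on a CPU-only build as well, instead of raising an error when TensorRT or CUDA is unavailable.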