Inference fails after loading the engine file #3
The example in the README generates an engine with two outputs, so you need to allocate memory for the second output as well and add it to the bindings.
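In TensorRT's Python API, every binding (inputs and all outputs) needs its own buffer, and the bindings list passed to the execution context must have one entry per binding in binding order. A rough host-side sketch of that bookkeeping — the shapes and binding names here are illustrative assumptions, not values from the actual engine; in real code each host buffer is mirrored by a device allocation (e.g. `pycuda.driver.mem_alloc(buf.nbytes)`) whose pointer goes into the bindings list:

```python
import numpy as np

# Hypothetical binding shapes: the engine exposes one input and TWO outputs,
# so both outputs need a buffer and a slot in the bindings list.
binding_shapes = {
    "input": (1, 3, 224, 224),   # assumed shapes, for illustration only
    "output": (1, 1000),
    "output2": (1, 1000),
}

# One host buffer per binding; forgetting "output2" here is exactly the bug
# described above (only one output buffer allocated for a two-output engine).
host_buffers = {name: np.empty(shape, dtype=np.float32)
                for name, shape in binding_shapes.items()}

# The bindings list must follow the engine's binding order and cover every
# binding; with the real API these entries are device pointers, not arrays.
bindings_order = ["input", "output", "output2"]
assert len(bindings_order) == len(binding_shapes)
```

With pycuda, each entry of the real bindings list would be `int(pycuda.driver.mem_alloc(host_buffers[name].nbytes))`, copied to and from the host buffers around `context.execute_v2(bindings)`.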
Hi @traveller59
I inspected the data in the arrays `output` and `output2`; `output2` is all zeros, and `output` is as below:
Meanwhile, I generated a random input of the same shape with torch.rand().cuda(), fed it into the PyTorch model, and got the output tensor below:
It seems my conversion configuration is incorrect; could you give some advice? Below is my conversion script:
You need to convert the image to np.float32.
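The reason this matters: NumPy's random generators and many image loaders produce float64 arrays by default, while an engine built for float32 input expects a float32 host buffer of exactly the right byte size. A minimal sketch of the fix (the input shape is an assumption for illustration):

```python
import numpy as np

# np.random.rand returns float64; a float32 engine binding expects half the
# bytes, so copying a float64 buffer to the device corrupts the input.
image = np.random.rand(1, 3, 224, 224)

# Cast explicitly (and make the buffer contiguous) before the host-to-device
# copy; ascontiguousarray with dtype handles both in one step.
image = np.ascontiguousarray(image, dtype=np.float32)
assert image.dtype == np.float32
```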
Hi, I tried your newest inference code and got output results, but they differ from the results produced via TensorRT. Could you leave some comments on how to address this discrepancy?
Hi,
Here is my environment setting:
After installing the dependencies of this repo, I ran test.py and serialized the engine into the file "test.engine".
Then I tried to load it and run inference with standard TensorRT Python code, as below:
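A minimal version of that loading step, assuming the standard TensorRT Python API (the file name "test.engine" is from the serialization step above; running this requires TensorRT and a CUDA-capable GPU, so the import is kept inside the function):

```python
import os


def load_engine(path):
    """Deserialize a TensorRT engine file and return (engine, context).

    tensorrt is imported lazily so the sketch can be read and partially
    exercised on machines without TensorRT installed.
    """
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    # Runtime.deserialize_cuda_engine rebuilds the engine from the raw bytes
    # that were written out by engine.serialize() at build time.
    with open(path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    return engine, engine.create_execution_context()


# Only attempt deserialization if the serialized engine file actually exists.
if os.path.exists("test.engine"):
    engine, context = load_engine("test.engine")
```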
Then I get the error below:
Any advice would be welcome.
Thanks.