WinMLRunner.exe. Evaluating ..[FAILED] #379

Open
yahuuu opened this issue Mar 19, 2021 · 4 comments

@yahuuu

yahuuu commented Mar 19, 2021

I converted the PyTorch (.pt) model to an ONNX model and verified that the ONNX model's outputs are accurate. But the ONNX model fails to evaluate on the GPU.
I only saved the weights in the model, not the full module.
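For context, the output check I mentioned was roughly along these lines (a minimal sketch using onnxruntime; the model path, input shape, and tolerance are placeholders, not my exact script):

import numpy as np
import onnxruntime as ort

# Placeholder dummy input; shape (1, 3, 512, 512) is an assumption.
x = np.random.rand(1, 3, 512, 512).astype(np.float32)

# Run the exported model on CPU with onnxruntime.
session = ort.InferenceSession("test.onnx")
onnx_out = session.run(["output"], {"input": x})[0]

# torch_out would be the PyTorch model's output on the same input;
# the two should agree within a small tolerance, e.g.:
# np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5)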

The error looks like this:
Binding (device = CPU, iteration = 9, inputBinding = CPU, inputDataType = Tensor, deviceCreationLocation = WinML)...[SUCCESS]
Evaluating (device = CPU, iteration = 9, inputBinding = CPU, inputDataType = Tensor, deviceCreationLocation = WinML)...[SUCCESS]
...
etc
...
Binding (device = GPU, iteration = 1, inputBinding = CPU, inputDataType = Tensor, deviceCreationLocation = WinML)...[SUCCESS]
[FAILED]
Evaluating (device = GPU, iteration = 1, inputBinding = CPU, inputDataType = Tensor, deviceCreationLocation = WinML)...[FAILED]

command:
WinMLRunner.exe -model D:\projects\PYTHON\u2net_matting_verify\u2net_matting_verify\utils\test.onnx -iterations 9 -perf -GPU

environment:
WinMLRunner.exe v1.0.1.0 x64 executable
win10 20H2 19042.685, Intel(R) Core(TM) i7-8700 CPU, RAM 32.0 GB x64
GPU: Intel UHD Graphics 630 GPU Driver Version: 25.20.100.6473

@ryanlai2
Contributor

Hi @yahuuu, thanks for reaching out. May I ask if you can share your model with us for reproduction purposes?

@ryanlai2
Contributor

ryanlai2 commented Mar 22, 2021

Can you also share the ONNX operator set (opset) version of the model? Are you using the inbox or the NuGet version of WinML? If using NuGet, which version are you using?
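(If it helps, the opset can be read straight from the model file with the onnx Python package; a quick sketch, with the model path as a placeholder:)

import onnx

# Load the exported model and print its operator set version(s).
model = onnx.load("test.onnx")
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)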

@yahuuu
Author

yahuuu commented Mar 23, 2021

Hi @ryanlai2, this is the PyTorch model I'm using: a pre-trained model released by the MODNet community.
I followed the official ONNX conversion code; the PyTorch -> ONNX export was done on Ubuntu 16.04, and WinMLRunner.exe was run on Windows 10.

Ubuntu environment:
pytorch==1.3.0 (gpu version)
torchvision==0.4.1
WinMLRunner.exe v1.0.1.0 && v1.2.2 x64
CUDA Version: 10.1
NVIDIA-SMI 418.39
Tesla K40m

My export code:

import torch

# Export the MODNet model to ONNX (opset 10, dynamic batch dimension).
torch.onnx.export(modnet,
                  example,
                  export_onnx_file,
                  opset_version=10,
                  export_params=True,
                  do_constant_folding=True,
                  input_names=["input"],
                  output_names=["output"],
                  dynamic_axes={"input": {0: "batch_size"},
                                "output": {0: "batch_size"}})
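For completeness, modnet, example, and export_onnx_file come from setup roughly like this (a sketch; the import path, checkpoint file, constructor arguments, and input size are placeholders following the MODNet repo, not my exact script):

import torch
from src.models.modnet import MODNet  # model class from the MODNet repo (import path assumed)

# Build the model and load the released pre-trained weights (only the state_dict was saved).
modnet = MODNet(backbone_pretrained=False)
modnet.load_state_dict(torch.load("modnet_weights.ckpt", map_location="cpu"))
modnet.eval()

# Dummy input tensor used as `example` in the export call above (size is an assumption).
example = torch.rand(1, 3, 512, 512)
export_onnx_file = "test.onnx"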
If I export with pytorch==1.7.1 (CPU version), WinMLRunner won't even output results on the CPU.

@yahuuu
Author

yahuuu commented Mar 26, 2021

I also tried opset_version = 12 and WinMLRunner v1.2.2 (WinMLRunner v1.2.2.zip).
Still waiting for your answer.
