RUNTIME_EXCEPTION : Non-zero status code returned while running If node. #23213

Open
FFchopon opened this issue Dec 27, 2024 · 1 comment
Labels
model:transformer: issues related to a transformer model: BERT, GPT2, Hugging Face, Longformer, T5, etc.
stale: issues that have not been addressed in a while; categorized by a bot

Comments

@FFchopon

Describe the issue

I got an error while running the ONNX model below: Non-zero status code returned while running If node.

  • Specific error report:
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running If node. Name:'' Status Message: Non-zero status code returned while running If node. Name:'' Status Message: Non-zero status code returned while running GatherElements node. Name:'' Status Message: /software/onnxruntime/onnxruntime/core/providers/common.h:31 int64_t onnxruntime::HandleNegativeAxis(int64_t, int64_t) IsAxisInRange(axis, tensor_rank) was false. axis 1 is not in valid range [-1,0]
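For reference, the same axis-range check can be triggered in isolation with a minimal hand-built graph (a sketch only, not extracted from 38012.onnx): GatherElements with axis=1 on rank-1 inputs only accepts axes in [-1, 0], so the CPU kernel raises this exact RUNTIME_EXCEPTION at run time. The input shapes are left unspecified so that ONNX shape inference cannot reject the axis before the session runs.

import numpy as np
import onnx
from onnx import TensorProto, helper
import onnxruntime as ort

# One-node graph: GatherElements with axis=1. Shapes are omitted on purpose so
# the invalid axis is only detected by the kernel at run time.
node = helper.make_node("GatherElements", ["data", "indices"], ["out"], axis=1)
graph = helper.make_graph(
    [node], "gather_axis_repro",
    [helper.make_tensor_value_info("data", TensorProto.FLOAT, None),
     helper.make_tensor_value_info("indices", TensorProto.INT64, None)],
    [helper.make_tensor_value_info("out", TensorProto.FLOAT, None)])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
model.ir_version = 8  # keep the IR version low enough for older runtimes

sess = ort.InferenceSession(model.SerializeToString(),
                            providers=["CPUExecutionProvider"])
try:
    # Rank-1 data makes the valid axis range [-1, 0], so axis=1 must fail.
    sess.run(None, {"data": np.zeros(4, dtype=np.float32),
                    "indices": np.zeros(4, dtype=np.int64)})
except Exception as e:
    print(e)  # "... axis 1 is not in valid range [-1,0]"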

To reproduce

  1. Download the model
  2. Run the following script:
import onnx
import onnxruntime as ort
from onnxruntime.transformers import optimizer
import numpy as np

model_path = "38012.onnx"
optimized_model_path = f"./opt.onnx"

input_data = {
    'x': np.array([0.05135667, 0.49169028, 0.2317685, 0.7714388, 0.10556535,
                   0.7446228, 0.77673155, 0.4430063, 0.46662223, 0.4861915,
                   0.20035234, 0.5316502, 0.8579882, 0.01025127, 0.6977761,
                   0.7261769, 0.43401167, 0.77205604, 0.7047838, 0.8162092,
                   0.6132042, 0.59994316, 0.47380182, 0.89255065, 0.8315158,
                   0.7442334, 0.04432015, 0.9065669, 0.40642294, 0.8992343,
                   0.51915145, 0.96065384, 0.88932925, 0.7611128, 0.9960299,
                   0.04980307, 0.28150338, 0.6282024, 0.29980934, 0.6425377,
                   0.14627998, 0.38970673, 0.8462834, 0.8327852, 0.58551437,
                   0.5901391, 0.10345234, 0.1530145, 0.34184808, 0.96655446,
                   0.7688564, 0.10754307, 0.8349921, 0.46405357, 0.41953513,
                   0.24148946, 0.6070621, 0.00942784, 0.4898656, 0.3826074,
                   0.23482858, 0.5481194, 0.03267029, 0.36253762, 0.09667406,
                   0.5429725, 0.45386177, 0.66851056, 0.51004875, 0.39172414,
                   0.23835072, 0.02985721, 0.918535, 0.55690295, 0.97696245,
                   0.98216826, 0.6946321, 0.8859541, 0.1622831, 0.83600485,
                   0.4745072, 0.70302343, 0.3033251, 0.2943358, 0.77564985,
                   0.29016948, 0.84607005, 0.27696252, 0.9643625, 0.19356592,
                   0.78589076, 0.6827836, 0.98737943, 0.37815085, 0.1211899,
                   0.02344177, 0.97916144, 0.9867203, 0.7446272, 0.75813687,
                   0.31773388, 0.41267744, 0.12573875, 0.63623524, 0.09663095,
                   0.49160004, 0.6418833, 0.75377125, 0.48768246, 0.06855919,
                   0.4702471, 0.255228, 0.8079538, 0.5095185, 0.58212304,
                   0.06267849, 0.4565444, 0.00950742, 0.7498734, 0.04434598,
                   0.48962507, 0.3139298, 0.48399472, 0.44127202, 0.4732648,
                   0.3804463, 0.40799254, 0.24919167], dtype=np.float32),
    'x1': np.array([0.8426625, 0.9732153, 0.49775425, 0.05435705, 0.4693269,
                    0.2900393, 0.6734157, 0.6896115, 0.8811082, 0.11899561,
                    0.9244948, 0.94079465, 0.5876591, 0.23305634, 0.78063804,
                    0.17882146, 0.6678079, 0.70737696, 0.08595871, 0.05268361,
                    0.01278743, 0.25570008, 0.7130087, 0.1399794, 0.08106553,
                    0.5992047, 0.588875, 0.7871804, 0.7853509, 0.26299697,
                    0.8193554, 0.67199385, 0.6101456, 0.95636225, 0.5152923,
                    0.6044122, 0.44106615, 0.82251936, 0.54130244, 0.2778342,
                    0.601269, 0.6048449, 0.20572579, 0.3961332, 0.26576704,
                    0.24089175, 0.92432624, 0.5886368, 0.2728472, 0.01720504,
                    0.65580326, 0.91351014, 0.77888834, 0.60864544, 0.61413944,
                    0.7032979, 0.65464437, 0.1084903, 0.49285117, 0.9979988,
                    0.26293004, 0.38058266, 0.56481045, 0.7391961, 0.98462343,
                    0.02746766, 0.1915805, 0.799147, 0.29056203, 0.7198771,
                    0.79346496, 0.4845838, 0.2524755, 0.6142809, 0.29809123,
                    0.8227626, 0.78785723, 0.62629646, 0.8279695, 0.44274712,
                    0.76114076, 0.26292846, 0.00214652, 0.29157782, 0.33320805,
                    0.43552852, 0.03375685, 0.7057689, 0.75814784, 0.31626043,
                    0.24448082, 0.01732731, 0.3749923, 0.8667468, 0.7575453,
                    0.17516032, 0.33060876, 0.22861947, 0.4026713, 0.17343079,
                    0.691345, 0.62467605, 0.38594428, 0.3417037, 0.871786,
                    0.3767675, 0.9026966, 0.39513087, 0.98681647, 0.04550003,
                    0.5636926, 0.8291888, 0.93976754, 0.874003, 0.66336656,
                    0.76403767, 0.26931816, 0.8255282, 0.9449286, 0.22858198,
                    0.07249299, 0.5257493, 0.28457695, 0.08677769, 0.8126051,
                    0.78178006, 0.2609097, 0.28725886], dtype=np.float32)
}

# Run the original model with all graph optimizations disabled.
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
original_session = ort.InferenceSession(model_path, sess_options, providers=["CPUExecutionProvider"])
original_output_names = [output.name for output in original_session.get_outputs()]
original_result = original_session.run(original_output_names, input_data)

# Optimize the model offline with the transformers optimizer, then run the optimized copy.
optimized_model = optimizer.optimize_model(model_path, opt_level=99)
optimized_model.save_model_to_file(optimized_model_path)
optimized_session = ort.InferenceSession(optimized_model_path, providers=["CPUExecutionProvider"])
optimized_output_names = [output.name for output in optimized_session.get_outputs()]
optimized_result = optimized_session.run(optimized_output_names, input_data)

# The two runs should agree within a small tolerance.
for r1, r2 in zip(original_result, optimized_result):
    np.testing.assert_allclose(r1, r2, atol=1e-3, rtol=1e-3)
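To narrow down where the invalid attribute lives, a small inspection script can walk the main graph and every If/Loop/Scan subgraph and print the axis of each GatherElements node (a diagnostic sketch, assuming the same 38012.onnx file; node names may be empty, as in the error above):

import onnx

model = onnx.load("38012.onnx")

def walk(graph, path="main"):
    # Report every GatherElements node with its axis attribute, then recurse
    # into any subgraphs held as GRAPH/GRAPHS attributes (If, Loop, Scan).
    for node in graph.node:
        if node.op_type == "GatherElements":
            axis = next((a.i for a in node.attribute if a.name == "axis"), 0)
            print(f"{path}: GatherElements '{node.name}' axis={axis}")
        for attr in node.attribute:
            if attr.type == onnx.AttributeProto.GRAPH:
                walk(attr.g, f"{path}/{node.op_type}")
            elif attr.type == onnx.AttributeProto.GRAPHS:
                for sub in attr.graphs:
                    walk(sub, f"{path}/{node.op_type}")

walk(model.graph)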

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

5c1b7cc

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

No response

github-actions bot added the model:transformer label on Dec 27, 2024
github-actions bot commented Feb 5, 2025

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale label on Feb 5, 2025