
Trained a 2-class classification model; inference with the trained model is normal, but after converting to ONNX the inference scores are always [0.5, 0.5] #3322

Closed
LiManshiang opened this issue Dec 11, 2024 · 6 comments

@LiManshiang

Welcome to PaddleClas, and thank you for your contribution!
When opening an issue, please provide the following information so that we can locate and resolve your problem quickly and effectively:

  1. PaddleClas and PaddlePaddle versions: please provide the version numbers or branches you are using, e.g. PaddleClas release/2.2 and PaddlePaddle 2.1.0
  2. Versions of other products involved: if you are also using other products such as PaddleServing or PaddleInference alongside PaddleClas, please provide their version numbers
  3. Training environment:
    a. Operating system: Linux
    b. Python version: Python 3.8
    c. CUDA/cuDNN versions: CUDA 12.0 / cuDNN 8.8
  4. Complete code (any changes relative to the repo), detailed error messages, and relevant logs

The same image, 2 classes.

Result with python/predict_cls.py [use_onnx=True]:
class id(s): [1, 0], score(s): [0.50, 0.50], label_name(s): []

Result with tools/infer.py [using the trained model]:
[{'class_ids': [1, 0], 'scores': [0.99987, 0.00013], 'label_names': []}]

The trained model was converted to an inference model, and the inference model to an ONNX model, both using the commands given in the official documentation. Could you please help analyze what the problem might be? Many thanks!
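
As a side check (a minimal sketch; the file name, input shape, and lack of real preprocessing are assumptions), the exported ONNX model can be run directly through onnxruntime to see whether the constant [0.5, 0.5] comes from the network itself rather than from the predict_cls.py post-processing:

import numpy as np
import onnxruntime as ort

# Load the exported ONNX model on CPU (file name is an assumption).
sess = ort.InferenceSession("inference.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Dummy input; in practice feed the same preprocessed image used by
# predict_cls.py (assumed shape 1x3x224x224, float32, normalized).
x = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = sess.run(None, {input_name: x})
print(outputs[0])  # identical [0.5, 0.5] rows for different inputs point to a broken export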

@Bobholamovic
Member

Which model is it, specifically? Also, is the result correct when you predict with the inference model (without converting to ONNX)? It looks like the model may depend on some "dynamic" information (for example, execution logic that depends on tensor values), which would prevent it from being converted to a static graph correctly.
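
A toy illustration (not taken from PaddleClas) of the kind of "dynamic" behavior meant here: a Python-side branch on a tensor value runs fine in dynamic mode, but tracing-based static-graph export can only record whichever branch was taken during tracing.

import paddle

def forward(x):
    # The branch depends on the tensor's value, so it cannot be captured
    # by static-graph export unless rewritten with graph control-flow ops.
    if float(x.mean()) > 0:
        return paddle.nn.functional.softmax(x * 2)
    return paddle.nn.functional.softmax(x - 1)

print(forward(paddle.to_tensor([[1.0, -1.0]])))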

@LiManshiang
Author

Which model is it, specifically? Also, is the result correct when you predict with the inference model (without converting to ONNX)? It looks like the model may depend on some "dynamic" information (for example, execution logic that depends on tensor values), which would prevent it from being converted to a static graph correctly.

Thank you for the prompt reply!

When using the inference model, I get the error below; the inference never succeeds.
The command:
λ 397424f8ea23 /home/PaddleClas/deploy python3 python/predict_cls.py \
    -c ../cls_train/inference.yaml \
    -o Global.use_onnx=False \
    -o Global.use_gpu=False \
    -o Global.inference_model_dir=../cls_train/cls_inference_mv3 \
    -o Global.infer_imgs=../data/test/candle.png

After running it:

grep: warning: GREP_OPTIONS is deprecated; please use an alias or script


C++ Traceback (most recent call last):

0 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor>> paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
3 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
4 paddle::framework::NaiveExecutor::CreateVariables(paddle::framework::ProgramDesc const&, int, bool, paddle::framework::Scope*)


Error Message Summary:

FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1733963621 (unix time) try "date -d @1733963621" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x0) received by PID 88008 (TID 0x7fc95fcc1740) from PID 0 ***]

Segmentation fault (core dumped)

Installing via pip install paddleclas and running inference through paddleclas gives the same error as above.

Environment:
nvidia-cublas-cu12 12.3.4.1
nvidia-cuda-cupti-cu12 12.3.101
nvidia-cuda-nvrtc-cu12 12.3.107
nvidia-cuda-runtime-cu12 12.3.101
nvidia-cudnn-cu12 9.0.0.312
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-nccl-cu12 2.19.3
nvidia-nvjitlink-cu12 12.6.77
nvidia-nvtx-cu12 12.4.127
onnx 1.17.0
onnx-simplifier 0.4.36
onnxruntime 1.20.1
onnxruntime-gpu 1.20.1
opencv-python 4.6.0.66
opt-einsum 3.3.0
packaging 24.2
paddle-bfloat 0.1.7
paddle2onnx 1.2.0
paddleclas 2.6.0
paddlepaddle-gpu 2.6.0

The Docker image is:
paddlepaddle/paddle:latest-dev-cuda12.3-cudnn9.0-trt8.6-gcc12.2
CUDA version inside the container:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0

The task is image classification.

I have tried both of these config files:
MobileNetV3_small_x0_35.yaml
PPLCNet_x1_0_1.yaml
Training with either config gives the same result; the phenomena are:
1. Inference with the trained model works correctly.
2. Inference with the converted inference model errors out (segfault).
3. The ONNX model exported from the inference model cannot reproduce the correct results.

1. Result with python/predict_cls.py [use_onnx=True]:
class id(s): [1, 0], score(s): [0.50, 0.50], label_name(s): []

2. Inference with the inference model (use_onnx=False): the same C++ traceback and segmentation fault as shown above.

3. Result with tools/infer.py [using the trained model]:
[{'class_ids': [1, 0], 'scores': [0.99987, 0.00013], 'label_names': []}]

Could you please help analyze the possible cause? Thanks!

@Bobholamovic
Member

Were there any errors or warnings when converting the trained model to the inference model?

@LiManshiang
Author

Were there any errors or warnings when converting the trained model to the inference model?

Here is the output from exporting the trained model to the inference model:
Command:
λ 397424f8ea23 /home/PaddleClas python tools/export_model.py -c ./cls_train/PPLCNet_x1_0.yaml -o Global.pretrained_model=./cls_train/cls_output/best_model/model -o Global.save_inference_dir=./cls_train/cls_inference/model

Output log:
[2024/12/12 04:11:37] ppcls INFO: train with paddle 2.6.0 and device Place(gpu:0)
W1212 04:11:37.709197 88091 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.9, Driver API Version: 12.4, Runtime API Version: 11.8
W1212 04:11:37.833151 88091 gpu_resources.cc:164] device: 0, cuDNN Version: 8.8.
[2024/12/12 04:11:38] ppcls INFO: Finish load pretrained model from ./cls_train/cls_output/best_model/model.pdparams
[2024/12/12 04:11:38] ppcls INFO: Finish load pretrained model from ./cls_train/cls_output/best_model/model.pdparams
I1212 04:11:40.196909 88091 program_interpreter.cc:212] New Executor is Running.
[2024/12/12 04:11:40] ppcls INFO: Export succeeded! The inference model exported has been saved in "./cls_train/cls_inference/model/inference".

It looks normal.

The only thing that stands out is the cuDNN version: device: 0, cuDNN Version: 8.8. The same warning also appears during training. Could this cuDNN version mismatch cause a problem?
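
A quick way to compare the CUDA/cuDNN versions the installed paddlepaddle-gpu wheel was built against with what the container provides (a small sketch; the exact output depends on the install):

import paddle

print("paddle:", paddle.__version__)
print("built with CUDA:", paddle.version.cuda())
print("built with cuDNN:", paddle.version.cudnn())

# Basic sanity check that the paddle runtime works on this machine at all.
paddle.utils.run_check()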

@Bobholamovic
Member

The model export looks normal. To confirm whether it is a cuDNN issue, you could try CPU inference and check whether the result is correct.
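
For example, a minimal sketch of loading the exported inference model directly through the Paddle Inference Python API on CPU (paths and input shape are assumptions):

import numpy as np
from paddle.inference import Config, create_predictor

# Point at the exported inference model files (paths are assumptions).
config = Config("cls_inference_mv3/inference.pdmodel",
                "cls_inference_mv3/inference.pdiparams")
config.disable_gpu()  # force CPU execution to rule out CUDA/cuDNN issues
predictor = create_predictor(config)

input_name = predictor.get_input_names()[0]
input_handle = predictor.get_input_handle(input_name)
input_handle.copy_from_cpu(np.random.rand(1, 3, 224, 224).astype("float32"))

predictor.run()
output_name = predictor.get_output_names()[0]
print(predictor.get_output_handle(output_name).copy_to_cpu())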

@TingquanGao
Collaborator

This issue has had no response for a long time and will be closed. You can reopen it or open a new issue if you are still confused.


From Bot
