
I'm not getting the output #34

Open
muzair1103 opened this issue Aug 18, 2024 · 5 comments

Comments

@muzair1103

I'm not sure what's going wrong. Can you please help me out, @zhouyuchong?

root@omen16:~/face-recognition-deepstream/test# python3 face_test_demo.py
--------- start app
Unknown or legacy key specified 'alignment' for group [property]
Unknown or legacy key specified 'user-meta' for group [property]
localhost;9092;deepstream
(True, 'success, state ')

set index of src task-0 to 0
add set return :
Starting pipeline

**PERF: {'stream0': 0.0}

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.875435534 4930 0x57447a9e2b60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 2]: deserialized trt engine from :/root/face-recognition-deepstream/models/arcface_weights/arcfaceresnet100-8.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x112x112
1 OUTPUT kFLOAT fc1 512

0:00:06.997348438 4930 0x57447a9e2b60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 2]: Use deserialized engine model: /root/face-recognition-deepstream/models/arcface_weights/arcfaceresnet100-8.onnx_b1_gpu0_fp16.engine
0:00:07.001029419 4930 0x57447a9e2b60 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 2]: Load new model:/root/face-recognition-deepstream/src/kbds/configs/face/config_arcface.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:07.013959838 4930 0x57447a9e2b60 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1244> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5

**PERF: {'stream0': 0.0}

0:00:11.112644081 4930 0x57447a9e2b60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/root/face-recognition-deepstream/models/retinaface/retinaface_resnet50.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input0 3x640x640
1 OUTPUT kFLOAT output0 16800x4
2 OUTPUT kFLOAT 839 16800x10
3 OUTPUT kFLOAT 840 16800x2

0:00:11.233843179 4930 0x57447a9e2b60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /root/face-recognition-deepstream/models/retinaface/retinaface_resnet50.onnx_b1_gpu0_fp16.engine
0:00:11.238144005 4930 0x57447a9e2b60 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:/root/face-recognition-deepstream/src/kbds/configs/face/config_retinaface.txt sucessfully
(True, 'success')
gstname= video/x-raw
pad name: sink_0
Decodebin linked to pipeline
mimetype is video/x-raw
src:
source_bin: <gi.GstURIDecodeBin object at 0x7cb6401f1c00 (GstURIDecodeBin at 0x57447a9d80a0)>

ERROR: Batch size not 1
0:00:12.279373687 4930 0x5744d91183f0 WARN nvinfer gstnvinfer.cpp:2420:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:12.279398454 4930 0x5744d91183f0 WARN nvinfer gstnvinfer.cpp:2420:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Traceback (most recent call last):
File "/root/face-recognition-deepstream/test/../src/kbds/core/pipeline.py", line 122, in bus_call_abs
search_index = search_index[0].split('-')[2][:-1]
TypeError: 'NoneType' object is not subscriptable
detect face-0 with resolution 40x44

**PERF: {'stream0': 1.61}

set index of src task-5 to 1
add set return :
(True, 'success')
src:
source_bin: <gi.GstURIDecodeBin object at 0x7cb693d9f440 (GstURIDecodeBin at 0x57448c60f7a0)>

gstname= video/x-raw
pad name: sink_1
Decodebin linked to pipeline

**PERF: {'stream0': 0.0}

get status of task-5: (True, 'success', <kbds.core.source.Stream object at 0x7cb63fe6f640>)
get status of task-1111: (False, 'source id(task-1111) not exist', None)

**PERF: {'stream0': 0.0}

======== delete src test =======
delete finished

**PERF: {'stream0': 0.0}

======== delete src test =======

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}
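
Separately, the traceback above (`TypeError: 'NoneType' object is not subscriptable` in `bus_call_abs`) means the message text didn't match the pattern the code expected, so the regex search returned `None` before the `[0].split('-')[2][:-1]` chain ran. A defensive parse would look something like this (hypothetical helper, not the repo's actual code; the `task-<n>` pattern is inferred from the log lines above):

```python
import re

# Hypothetical defensive version of the index parse in pipeline.py's bus
# callback: return the numeric index from a 'task-<n>' token, or None if
# the message doesn't contain one, instead of crashing on a failed match.
def parse_task_index(message: str):
    match = re.search(r"task-(\d+)", message)
    return int(match.group(1)) if match else None

assert parse_task_index("set index of src task-5 to 1") == 5
assert parse_task_index("no task token here") is None
```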

@zhouyuchong (Owner)

@muzair1103 It seems your ONNX was generated with batch size 1, so you should set the nvstreammux and nvinfer batch size to 1 as well.
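
In DeepStream terms that means `batch-size=1` in the `[property]` group of each nvinfer config (the `config_retinaface.txt` / `config_arcface.txt` files named in the log), and `batch-size` set to 1 on the `nvstreammux` element in the app. A minimal sketch of the config change (the key name is standard DeepStream; the rest of the group is elided):

```ini
[property]
# must match the batch size the .engine / ONNX was exported with
batch-size=1
```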

@muzair1103 (Author) commented Aug 19, 2024

@zhouyuchong the issue is still not solved.

@zhouyuchong (Owner)

@muzair1103 Your inference engine only supports batch size 1, but you have 2 sources feeding the pipeline.
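
The `ERROR: Batch size not 1` in the log is exactly this mismatch: a second source was added to a pipeline whose engine was built for one frame per batch. A hypothetical guard an app could apply before adding a source (names are illustrative, not from this repo):

```python
# Hypothetical helper, not from this repo: refuse to add another source
# when the total would exceed what the TensorRT engine can batch.
def can_add_source(current_sources: int, engine_batch_size: int) -> bool:
    """Return True if one more source still fits in the engine's batch."""
    return current_sources + 1 <= engine_batch_size

# With an engine built for batch size 1 (as in the log above), the
# first source fits but a second one does not:
assert can_add_source(0, 1) is True
assert can_add_source(1, 1) is False
```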

@muzair1103 (Author)

@zhouyuchong After the latest commit I'm trying to run the pipeline. Can I get the ONNX models, please?

@zhouyuchong (Owner)

@muzair1103 Both models are generated using the links in the README. You can download retinaface here; as for arcface, tensorrtx uses the .wts weights directly. It's not complicated.
