Conversation


@vividf vividf commented Sep 26, 2025

Summary

This PR:

  • Introduces post-training quantization (PTQ) with INT8 precision for the CalibrationClassification component.
  • Adds support and evaluation scripts for both ONNX and TensorRT inference using INT8.

For easier review, the file name will remain visualize_lidar_camera_projection.py in this PR. It will be renamed to toolkit.py in a follow-up change.
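For context on what PTQ does here: post-training quantization maps FP32/FP16 tensors to INT8 using a scale derived from calibration statistics, with no retraining. A minimal sketch of symmetric per-tensor INT8 quantize/dequantize in pure NumPy (all names are illustrative, not the component's actual code):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: scale chosen from max |x|."""
    scale = np.abs(x).max() / 127.0  # map [-max|x|, +max|x|] onto [-127, 127]
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original tensor."""
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
max_err = float(np.abs(x - x_hat).max())  # rounding error, bounded by ~scale/2
```

The scale here comes from a single tensor; in real PTQ pipelines (e.g. TensorRT INT8 calibration) it is estimated over a calibration dataset, which is why a representative sample set matters for the accuracy numbers below.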

Model Performance Comparison: FP16 vs INT8

| Metric | FP16 | INT8 |
| --- | --- | --- |
| Total Samples | 6,800 | 6,800 |
| Correct Predictions | 6,336 | 6,319 |
| Accuracy | 93.18% | 92.93% |
| Average Latency | 5.29 ms | 2.46 ms |

Observations:

  • INT8 is ~2.15× faster than FP16 (average latency 5.29 ms → 2.46 ms).
  • The accuracy drop is minimal (0.25 pp).
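The headline figures follow directly from the table; a quick sanity computation (not part of the PR's code):

```python
# Values taken from the FP16 vs INT8 comparison table above
fp16_latency_ms, int8_latency_ms = 5.29, 2.46
fp16_acc, int8_acc = 93.18, 92.93

speedup = fp16_latency_ms / int8_latency_ms  # latency ratio, ~2.15x
acc_drop_pp = fp16_acc - int8_acc            # absolute drop in percentage points

print(f"speedup: {speedup:.2f}x, accuracy drop: {acc_drop_pp:.2f} pp")
```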

Per-Class Accuracy

| Class | Description | FP16 Accuracy | INT8 Accuracy |
| --- | --- | --- | --- |
| 0 | Miscalibrated | 99.21% | 99.53% |
| 1 | Calibrated | 87.15% | 86.32% 🔻 |

Observations:

  • INT8 slightly improves detection of the miscalibrated class.
  • Small drop (~0.8 pp) in calibrated-class accuracy.
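The per-class deltas can likewise be read straight off the table (illustrative check only):

```python
# Per-class INT8 vs FP16 deltas, in percentage points (from the table above)
miscalibrated_delta = 99.53 - 99.21  # class 0: positive, INT8 improves
calibrated_delta = 86.32 - 87.15     # class 1: negative, INT8 drops

print(f"class 0: {miscalibrated_delta:+.2f} pp, class 1: {calibrated_delta:+.2f} pp")
```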

Change point

Detailed explanation

Note

Tests performed

  • Log output

@vividf vividf requested a review from KSeangTan September 26, 2025 08:11
vividf and others added 18 commits October 1, 2025 14:01
Signed-off-by: vividf <[email protected]>

vividf commented Oct 8, 2025

Will separate the PR

@vividf vividf closed this Oct 8, 2025