[pre-commit.ci] pre-commit autoupdate #490

Open: wants to merge 2 commits into base `main`
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
```diff
@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.4.0
+    rev: v5.0.0
     hooks:
       - id: check-docstring-first
       - id: check-toml
@@ -9,21 +9,21 @@ repos:
       - id: end-of-file-fixer
 
   - repo: https://github.com/PyCQA/flake8
-    rev: 6.0.0
+    rev: 7.1.1
     hooks:
       - id: flake8
         args: [--config=setup.cfg]
 
   - repo: https://github.com/omnilib/ufmt
-    rev: v2.0.1
+    rev: v2.8.0
     hooks:
       - id: ufmt
         additional_dependencies:
           - black == 22.3.0
           - usort == 1.0.2
 
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.16
+    rev: 0.7.21
     hooks:
       - id: mdformat
         additional_dependencies:
```
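The revision bumps above can be reproduced and verified locally before merging; a minimal sketch, assuming `pre-commit` is installed and the commands are run from the repository root:

```shell
# Re-pin every hook in .pre-commit-config.yaml to its latest tagged
# release (this is the same operation pre-commit.ci performs)
pre-commit autoupdate

# Run all hooks against the whole tree to confirm the updated
# versions still pass
pre-commit run --all-files
```

Note that `autoupdate` only bumps the `rev` fields; pinned `additional_dependencies` such as `black == 22.3.0` are left untouched and must be updated by hand if desired.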
2 changes: 1 addition & 1 deletion deployment/libtorch/README.md
```diff
@@ -8,7 +8,7 @@ The LibTorch inference for `yolort`, both GPU and CPU are supported.
 
 - LibTorch 1.8.0+ together with corresponding TorchVision 0.9.0+
 - OpenCV
-- CUDA 10.2+ \[Optional\]
+- CUDA 10.2+ [Optional]
 
 *We didn't impose too strong restrictions on the version of CUDA.*
```
4 changes: 2 additions & 2 deletions deployment/onnxruntime/README.md
````diff
@@ -8,7 +8,7 @@ The ONNX Runtime inference for `yolort`, both CPU and GPU are supported.
 
 - ONNX Runtime 1.7+
 - OpenCV
-- CUDA \[Optional\]
+- CUDA [Optional]
 
 *We didn't impose too strong restrictions on the versions of dependencies.*
@@ -30,7 +30,7 @@ The ONNX model exported by yolort differs from other pipeline in the following t
 
 And then, you can find that a ONNX model ("best.onnx") have been generated in the directory of "best.pt". Set the `size_divisible` here according to your model, 32 for P5 ("yolov5s.pt" for instance) and 64 for P6 ("yolov5s6.pt" for instance).
 
-1. \[Optional\] Quick test with the ONNX Runtime Python interface.
+1. [Optional] Quick test with the ONNX Runtime Python interface.
 
    ```python
   from yolort.runtime import PredictorORT
````
2 changes: 1 addition & 1 deletion deployment/ppq/README.md
```diff
@@ -13,7 +13,7 @@ The ppq int8 ptq example of `yolort`.
 
 ## Usage
 
-Here we will mainly discuss how to use the ppq interface, we recommend that you check out [tutorial](https://github.com/openppl-public/ppq/tree/master/ppq/samples) first. This code can be used to do the following stuff:
+Here we will mainly discuss how to use the ppq interface, we recommend that you check out [tutorial](https://github.com/openppl-public/ppq/tree/master/ppq/samples) first. This code can be used to do the following stuff:
 
 1. Distill your calibration data (Optional: If you don't have images for calibration and bn is in your model, you can use this)
```
4 changes: 2 additions & 2 deletions deployment/tensorrt/README.md
````diff
@@ -27,7 +27,7 @@ Here we will mainly discuss how to use the C++ interface, we recommend that you
    trtexec --onnx=best.trt.onnx --saveEngine=best.engine --workspace=8192
    ```
 
-1. \[Optional\] Quick test with the TensorRT Python interface.
+1. [Optional] Quick test with the TensorRT Python interface.
 
    ```python
   import torch
@@ -58,7 +58,7 @@ Here we will mainly discuss how to use the C++ interface, we recommend that you
    cmake --build . # Can also use the yolort_trt.sln to build on Windows System
    ```
 
-- \[Windows System Only\] Copy following dependent dynamic link libraries (xxx.dll) to Release/Debug directory
+- [Windows System Only] Copy following dependent dynamic link libraries (xxx.dll) to Release/Debug directory
 
   - cudnn_cnn_infer64_8.dll, cudnn_ops_infer64_8.dll, cudnn64_8.dll, nvinfer.dll, nvinfer_plugin.dll, nvonnxparser.dll, zlibwapi.dll (On which CUDA and cudnn depend)
   - opencv_corexxx.dll opencv_imgcodecsxxx.dll opencv_imgprocxxx.dll (Subsequent dependencies by OpenCV or you can also use Static OpenCV Library)
````