Lite.AI.ToolKit 🚀🚀🌟: An Out-of-the-Box C++ AI Model Toolkit


English | 中文文档 | MacOS | Linux | Windows


Lite.AI.ToolKit 🚀🚀🌟: a lightweight C++ AI model toolkit, user-friendly (well, sort of) and out-of-the-box. It already includes 70+ popular open-source models, such as the latest RVM, YOLOX, YOLOP, YOLOR, YoloV5, DeepLabV3 and ArcFace, and more will keep being added 😎. It is a C++ toolbox curated out of personal interest; emmm 😞 ... it is not very polished yet, but building it as a lib works fine. There is no clear roadmap: when I see an interesting algorithm I may fold it in, so let's see how it goes. My personal interests currently focus on detection, segmentation, matting, recognition and object tracking. Lite.AI.ToolKit is based on the ONNXRuntime C++ inference engine by default; support for NCNN, MNN and TNN will be added step by step, and MNN, NCNN and TNN inference is already available for some models. Ease of use is the main consideration for now. If you need higher performance, you can optimize on top of the C++ implementations and ONNX files provided by this project ~ Issues are welcome, and I will try to answer them ~👏👋

Core Features 🚀🚀🌟

  • ❤️ User-friendly, out-of-the-box. Simple and consistent call syntax such as lite::cv::Type::Class; see the minimal sketch below and examples.
  • Few dependencies, easy to build. Currently, only OpenCV and ONNXRuntime are required by default; see build.
  • ❤️ Many algorithm modules, continuously updated. Currently 10+ algorithm modules, 70+ popular open-source models and 500+ .onnx/.mnn/.param&bin(ncnn)/.tnnmodel&tnnproto weight files, covering object detection, face detection, face recognition, semantic segmentation, matting and more. See the Model Zoo and lite.ai.toolkit.hub.onnx.md for details. More new models will keep being added ~ 😎
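
For instance, a minimal end-to-end call looks like the sketch below; the model and image paths are placeholders, see the detailed examples in Section 3.

#include "lite/lite.h"

static void minimal_demo()
{
  // the consistent lite::cv::Type::Class style: construct a class with a weight file, then run detect
  auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg");
  yolov5->detect(img_bgr, detected_boxes);
  delete yolov5;
}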

If you find it useful, ❤️ please consider giving it a ⭐️🌟 ~ 🙃🤪🍀


1. Build Lite.AI.ToolKit 🚀🚀🌟

Build the Lite.AI.ToolKit shared library from source on MacOS. Note that Lite.AI.ToolKit uses onnxruntime as the default backend, since onnxruntime supports most of the native ONNX operators and offers better usability. How to build for Linux and Windows? Click ▶️ to see.

⚠️ Linux and Windows

⚠️ The release packages of Lite.AI.ToolKit do not directly support Linux and Windows yet; you need to build from the Lite.AI.ToolKit source. First, download (if official prebuilt releases exist) or build OpenCV, ONNXRuntime and any other inference engines you need, such as MNN, NCNN and TNN, then put their header files into the corresponding folders, or directly use the header files provided by this project. The dependency headers in this project were copied directly from the corresponding official repositories, but the shared libraries must be rebuilt or downloaded per operating system. MacOS users can directly use the shared libraries of each dependency shipped with this project.

  • lite.ai.toolkit/opencv2
      cp -r you-path-to-downloaded-or-built-opencv/include/opencv4/opencv2 lite.ai.toolkit/opencv2
  • lite.ai.toolkit/onnxruntime
      cp -r you-path-to-downloaded-or-built-onnxruntime/include/onnxruntime lite.ai.toolkit/onnxruntime
  • lite.ai.toolkit/MNN
      cp -r you-path-to-downloaded-or-built-MNN/include/MNN lite.ai.toolkit/MNN
  • lite.ai.toolkit/ncnn
      cp -r you-path-to-downloaded-or-built-ncnn/include/ncnn lite.ai.toolkit/ncnn
  • lite.ai.toolkit/tnn
      cp -r you-path-to-downloaded-or-built-TNN/include/tnn lite.ai.toolkit/tnn

Then copy the libraries of each dependency into the lite.ai.toolkit/lib folder, and refer to the build documentation of each dependency.

  • lite.ai.toolkit/lib

      cp you-path-to-downloaded-or-built-opencv/lib/*opencv* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-onnxruntime/lib/*onnxruntime* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-MNN/lib/*MNN* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-ncnn/lib/*ncnn* lite.ai.toolkit/lib
      cp you-path-to-downloaded-or-built-TNN/lib/*TNN* lite.ai.toolkit/lib
  • Windows: you can refer to issue#6, which discusses common build problems.

  • Linux: follow the MacOS build and simply swap in the Linux versions of the dependencies. A Linux release will be added soon ~ issue#2

  • Happy news !!! : 🚀 You can download the latest prebuilt shared libraries from the official ONNXRuntime builds, covering Windows, Linux, MacOS and Arm !!! Both CPU and GPU versions are available, so there is no need to build from source anymore, nice. Download the latest libraries from v1.8.1. Lite.AI.ToolKit currently uses 1.7.0, which you can download from v1.7.0, but 1.8.1 should also work. For OpenCV, try building from source (Linux) or download the official prebuilt libraries from OpenCV 4.5.3 (Windows). Then put the headers and libraries into the folders described above.

    git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest source
    cd lite.ai.toolkit && sh ./build.sh  # on MacOS you can directly use the bundled OpenCV, ONNXRuntime, MNN, NCNN and TNN libraries, no rebuild needed
  • Windows GPU compatibility: see issue#10.

  • Linux GPU compatibility: see issue#97.

  • You can refer to the following CMakeLists.txt settings to link the shared libraries.

cmake_minimum_required(VERSION 3.17)
project(lite.ai.toolkit.demo)

set(CMAKE_CXX_STANDARD 11)

# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})

set(OpenCV_LIBS
        opencv_highgui
        opencv_core
        opencv_imgcodecs
        opencv_imgproc
        opencv_video
        opencv_videoio
        )
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)

add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
        lite.ai.toolkit
        onnxruntime
        MNN  # need, if built lite.ai.toolkit with ENABLE_MNN=ON,  default OFF
        ncnn # need, if built lite.ai.toolkit with ENABLE_NCNN=ON, default OFF 
        TNN  # need, if built lite.ai.toolkit with ENABLE_TNN=ON,  default OFF 
        ${OpenCV_LIBS})  # link lite.ai.toolkit & other libs.
How to link the Lite.AI.ToolKit shared library?
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib 
liblite.ai.toolkit.0.0.1.dylib:
        @rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
        @rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
        @rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
        ...
cd ../ && tree .
├── bin
├── include
│   ├── lite
│   │   ├── backend.h
│   │   ├── config.h
│   │   └── lite.h
│   └── ort
└── lib
    └── liblite.ai.toolkit.0.0.1.dylib
  • Run the prebuilt examples:
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x  1 root  staff   301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
  • To link the lite.ai.toolkit shared library, you also need to make sure that OpenCV and onnxruntime are linked correctly, as in the CMakeLists.txt example above.

You can find a simple and complete demo of how to correctly link the Lite.AI.ToolKit shared library in lite.ai.toolkit.demo.

2. Model Downloads

Lite.AI.ToolKit currently includes 70+ popular open-source models and 500+ .onnx/.mnn/.param&bin(ncnn)/.tnnmodel&tnnproto files, most of which I converted myself. You can call any of them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. See Examples for Lite.AI.ToolKit for more details.

Mapping between namespaces and Lite.AI.ToolKit algorithm modules

| Namespace | Details |
| --- | --- |
| lite::cv::detection | Object Detection. One-stage and anchor-free detectors, YoloV5, YoloV4, SSD, etc. ✅ |
| lite::cv::classification | Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc. ✅ |
| lite::cv::faceid | Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️ |
| lite::cv::face | Face Analysis. detect, align, pose, attr, etc. ❇️ |
| lite::cv::face::detect | Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️ |
| lite::cv::face::align | Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️ |
| lite::cv::face::pose | Head Pose Estimation. FSANet, etc. ❇️ |
| lite::cv::face::attr | Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ❇️ |
| lite::cv::segmentation | Object Segmentation. FCN, DeepLabV3, etc. ⚠️ |
| lite::cv::style | Style Transfer. Neural style transfer for now, e.g. FastStyleTransfer. ⚠️ |
| lite::cv::matting | Image Matting. Object and human matting. ⚠️ |
| lite::cv::colorization | Colorization. Turn gray images into RGB. ⚠️ |
| lite::cv::resolution | Super Resolution. ⚠️ |

Mapping between Lite.AI.ToolKit classes and weight files

The mapping between Lite.AI.ToolKit classes and their weight files can be found in lite.ai.toolkit.hub.onnx.md. For example, the weight files of lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are:

| Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size |
| --- | --- | --- | --- |
| lite::cv::detection::YoloV5 | yolov5l.onnx | yolov5 (🔥🔥💥↑) | 188Mb |
| lite::cv::detection::YoloV5 | yolov5m.onnx | yolov5 (🔥🔥💥↑) | 85Mb |
| lite::cv::detection::YoloV5 | yolov5s.onnx | yolov5 (🔥🔥💥↑) | 29Mb |
| lite::cv::detection::YoloV5 | yolov5x.onnx | yolov5 (🔥🔥💥↑) | 351Mb |
| lite::cv::detection::YoloX | yolox_x.onnx | YOLOX (🔥🔥!!↑) | 378Mb |
| lite::cv::detection::YoloX | yolox_l.onnx | YOLOX (🔥🔥!!↑) | 207Mb |
| lite::cv::detection::YoloX | yolox_m.onnx | YOLOX (🔥🔥!!↑) | 97Mb |
| lite::cv::detection::YoloX | yolox_s.onnx | YOLOX (🔥🔥!!↑) | 34Mb |
| lite::cv::detection::YoloX | yolox_tiny.onnx | YOLOX (🔥🔥!!↑) | 19Mb |
| lite::cv::detection::YoloX | yolox_nano.onnx | YOLOX (🔥🔥!!↑) | 3.5Mb |

This means that, depending on your use case, you can load any yolov5*.onnx or yolox_*.onnx through the same Lite.AI.ToolKit class, such as YoloV5 or YoloX.

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx");  // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx"); 
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");  
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // for mobile device 
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx");  // 3.5Mb only !
  • Object Detection

ONNXRuntime version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| YoloV5 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |
| YoloV3 | 236M | onnx-models | 🔥🔥🔥↑ | detection | | demo |
| TinyYoloV3 | 33M | onnx-models | 🔥🔥🔥↑ | detection | | demo |
| YoloV4 | 176M | YOLOv4... | 🔥🔥🔥↑ | detection | | demo |
| SSD | 76M | onnx-models | 🔥🔥🔥↑ | detection | | demo |
| SSDMobileNetV1 | 27M | onnx-models | 🔥🔥🔥↑ | detection | | demo |
| YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| TinyYoloV4VOC | 22M | yolov4-tiny... | 🔥🔥↑ | detection | | demo |
| TinyYoloV4COCO | 22M | yolov4-tiny... | 🔥🔥↑ | detection | | demo |
| YoloR | 39M | yolor | 🔥🔥↑ | detection | | demo |
| ScaledYoloV4 | 270M | ScaledYOLOv4 | 🔥🔥🔥↑ | detection | | demo |
| EfficientDet | 15M | ...EfficientDet... | 🔥🔥🔥↑ | detection | | demo |
| EfficientDetD7 | 220M | ...EfficientDet... | 🔥🔥🔥↑ | detection | | demo |
| EfficientDetD8 | 322M | ...EfficientDet... | 🔥🔥🔥↑ | detection | | demo |
| YOLOP | 30M | YOLOP | 🔥🔥↑ | detection | | demo |
| NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| YoloX_V_0_1_1 | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| YoloV5_V_6_0 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |

MNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| YoloV5 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |
| YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| YOLOP | 30M | YOLOP | 🔥🔥↑ | detection | | demo |
| NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| YoloX_V_0_1_1 | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| YoloR | 39M | yolor | 🔥🔥↑ | detection | | demo |
| YoloV5_V_6_0 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |

NCNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| YoloV5 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |
| YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| YOLOP | 30M | YOLOP | 🔥🔥↑ | detection | | demo |
| NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| NanoDetDepreciated | 1.1M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| NanoDetEfficientNetLiteD... | 12M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| YoloX_V_0_1_1 | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| YoloR | 39M | yolor | 🔥🔥↑ | detection | | demo |
| YoloRssss | 39M | yolor | 🔥🔥↑ | detection | | demo |
| YoloV5_V_6_0 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |
| YoloV5_V_6_0_P6 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |

TNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| YoloV5 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |
| YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| YOLOP | 30M | YOLOP | 🔥🔥↑ | detection | | demo |
| NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | | demo |
| YoloX_V_0_1_1 | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | | demo |
| YoloR | 39M | yolor | 🔥🔥↑ | detection | | demo |
| YoloV5_V_6_0 | 28M | yolov5 | 🔥🔥💥↑ | detection | | demo |
  • Face Recognition

ONNXRuntime version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| GlintArcFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintCosFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintPartialFC | 170M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| FaceNet | 89M | facenet... | 🔥🔥🔥↑ | faceid | | demo |
| FocalArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| FocalAsiaArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| TencentCurricularFace | 249M | TFace | 🔥🔥↑ | faceid | | demo |
| TencentCifpFace | 130M | TFace | 🔥🔥↑ | faceid | | demo |
| CenterLossFace | 280M | center-loss... | 🔥🔥↑ | faceid | | demo |
| SphereFace | 80M | sphere... | 🔥🔥↑ | faceid | ✅️ | demo |
| PoseRobustFace | 92M | DREAM | 🔥🔥↑ | faceid | ✅️ | demo |
| NaivePoseRobustFace | 43M | DREAM | 🔥🔥↑ | faceid | ✅️ | demo |
| MobileFaceNet | 3.8M | MobileFace... | 🔥🔥↑ | faceid | | demo |
| CavaGhostArcFace | 15M | cavaface... | 🔥🔥↑ | faceid | | demo |
| CavaCombinedFace | 250M | cavaface... | 🔥🔥↑ | faceid | | demo |
| MobileSEFocalFace | 4.5M | face_recog... | 🔥🔥↑ | faceid | | demo |

MNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| GlintArcFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintCosFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintPartialFC | 170M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| FaceNet | 89M | facenet... | 🔥🔥🔥↑ | faceid | | demo |
| FocalArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| FocalAsiaArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| TencentCurricularFace | 249M | TFace | 🔥🔥↑ | faceid | | demo |
| TencentCifpFace | 130M | TFace | 🔥🔥↑ | faceid | | demo |
| CenterLossFace | 280M | center-loss... | 🔥🔥↑ | faceid | | demo |
| SphereFace | 80M | sphere... | 🔥🔥↑ | faceid | ✅️ | demo |
| MobileFaceNet | 3.8M | MobileFace... | 🔥🔥↑ | faceid | | demo |
| CavaGhostArcFace | 15M | cavaface... | 🔥🔥↑ | faceid | | demo |
| CavaCombinedFace | 250M | cavaface... | 🔥🔥↑ | faceid | | demo |
| MobileSEFocalFace | 4.5M | face_recog... | 🔥🔥↑ | faceid | | demo |

NCNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| GlintArcFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintCosFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintPartialFC | 170M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| FaceNet | 89M | facenet... | 🔥🔥🔥↑ | faceid | | demo |
| FocalArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| FocalAsiaArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| TencentCurricularFace | 249M | TFace | 🔥🔥↑ | faceid | | demo |
| TencentCifpFace | 130M | TFace | 🔥🔥↑ | faceid | | demo |
| CenterLossFace | 280M | center-loss... | 🔥🔥↑ | faceid | | demo |
| SphereFace | 80M | sphere... | 🔥🔥↑ | faceid | ✅️ | demo |
| MobileFaceNet | 3.8M | MobileFace... | 🔥🔥↑ | faceid | | demo |
| CavaGhostArcFace | 15M | cavaface... | 🔥🔥↑ | faceid | | demo |
| CavaCombinedFace | 250M | cavaface... | 🔥🔥↑ | faceid | | demo |
| MobileSEFocalFace | 4.5M | face_recog... | 🔥🔥↑ | faceid | | demo |

TNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| GlintArcFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintCosFace | 92M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| GlintPartialFC | 170M | insightface | 🔥🔥🔥↑ | faceid | | demo |
| FaceNet | 89M | facenet... | 🔥🔥🔥↑ | faceid | | demo |
| FocalArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| FocalAsiaArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | | demo |
| TencentCurricularFace | 249M | TFace | 🔥🔥↑ | faceid | | demo |
| TencentCifpFace | 130M | TFace | 🔥🔥↑ | faceid | | demo |
| CenterLossFace | 280M | center-loss... | 🔥🔥↑ | faceid | | demo |
| SphereFace | 80M | sphere... | 🔥🔥↑ | faceid | ✅️ | demo |
| MobileFaceNet | 3.8M | MobileFace... | 🔥🔥↑ | faceid | | demo |
| CavaGhostArcFace | 15M | cavaface... | 🔥🔥↑ | faceid | | demo |
| CavaCombinedFace | 250M | cavaface... | 🔥🔥↑ | faceid | | demo |
| MobileSEFocalFace | 4.5M | face_recog... | 🔥🔥↑ | faceid | | demo |
  • Matting

ONNXRuntime version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | | demo |

MNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | | demo |

NCNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | ⚠️ | code |

TNN version:

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | ✅️ | demo |
  • Face Detection

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| UltraFace | 1.1M | Ultra-Light... | 🔥🔥🔥↑ | face::detect | | demo |
| RetinaFace | 1.6M | ...Retinaface | 🔥🔥🔥↑ | face::detect | | demo |
| FaceBoxes | 3.8M | FaceBoxes | 🔥🔥↑ | face::detect | | demo |
  • Face Alignment

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| PFLD | 1.0M | pfld_106_... | 🔥🔥↑ | face::align | | demo |
| PFLD98 | 4.8M | PFLD... | 🔥🔥↑ | face::align | ✅️ | demo |
| MobileNetV268 | 9.4M | ...landmark | 🔥🔥↑ | face::align | ✅️️ | demo |
| MobileNetV2SE68 | 11M | ...landmark | 🔥🔥↑ | face::align | ✅️️ | demo |
| PFLD68 | 2.8M | ...landmark | 🔥🔥↑ | face::align | ✅️ | demo |
| FaceLandmark1000 | 2.0M | FaceLandm... | 🔥↑ | face::align | ✅️ | demo |
  • Head Pose Estimation

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| FSANet | 1.2M | ...fsanet... | 🔥↑ | face::pose | | demo |
  • Face Attributes

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| AgeGoogleNet | 23M | onnx-models | 🔥🔥🔥↑ | face::attr | | demo |
| GenderGoogleNet | 23M | onnx-models | 🔥🔥🔥↑ | face::attr | | demo |
| EmotionFerPlus | 33M | onnx-models | 🔥🔥🔥↑ | face::attr | | demo |
| VGG16Age | 514M | onnx-models | 🔥🔥🔥↑ | face::attr | | demo |
| VGG16Gender | 512M | onnx-models | 🔥🔥🔥↑ | face::attr | | demo |
| SSRNet | 190K | SSR_Net... | 🔥↑ | face::attr | | demo |
| EfficientEmotion7 | 15M | face-emo... | 🔥↑ | face::attr | ✅️ | demo |
| EfficientEmotion8 | 15M | face-emo... | 🔥↑ | face::attr | | demo |
| MobileEmotion7 | 13M | face-emo... | 🔥↑ | face::attr | | demo |
| ReXNetEmotion7 | 30M | face-emo... | 🔥↑ | face::attr | | demo |
  • Image Classification

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| EfficientNetLite4 | 49M | onnx-models | 🔥🔥🔥↑ | classification | | demo |
| ShuffleNetV2 | 8.7M | onnx-models | 🔥🔥🔥↑ | classification | | demo |
| DenseNet121 | 30.7M | torchvision | 🔥🔥🔥↑ | classification | | demo |
| GhostNet | 20M | torchvision | 🔥🔥🔥↑ | classification | | demo |
| HdrDNet | 13M | torchvision | 🔥🔥🔥↑ | classification | | demo |
| IBNNet | 97M | torchvision | 🔥🔥🔥↑ | classification | | demo |
| MobileNetV2 | 13M | torchvision | 🔥🔥🔥↑ | classification | | demo |
| ResNet | 44M | torchvision | 🔥🔥🔥↑ | classification | | demo |
| ResNeXt | 95M | torchvision | 🔥🔥🔥↑ | classification | | demo |
  • Semantic Segmentation

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| DeepLabV3ResNet101 | 232M | torchvision | 🔥🔥🔥↑ | segmentation | | demo |
| FCNResNet101 | 207M | torchvision | 🔥🔥🔥↑ | segmentation | | demo |
  • Style Transfer

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| FastStyleTransfer | 6.4M | onnx-models | 🔥🔥🔥↑ | style | | demo |
  • Colorization

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| Colorizer | 123M | colorization | 🔥🔥🔥↑ | colorization | | demo |
  • Super Resolution

| Class | Size | From | Awesome | File Type | State | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| SubPixelCNN | 234K | ...PIXEL... | 🔥↑ | resolution | | demo |

3. Examples

More examples can be found in lite.ai.toolkit.examples. Click ▶️ to see more examples under each topic.

Example 0: Object detection with YoloV5. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolov5;
}

The output is:

Or you can use the newest 🔥🔥 YOLO series detectors YOLOX or YoloR, which produce similar results.


Example 1: Video matting with RobustVideoMatting (2021) 🔥🔥🔥. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents, false, 0.4f);
  
  delete rvm;
}

The output is:



Example 2: Detecting 1000 facial landmarks with FaceLandmarks1000. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:


Example 3: Image colorization with colorization. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:



Example 4: Face recognition with ArcFace. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267
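
For reference, the similarity printed above is the cosine similarity of two embeddings, cos(a, b) = dot(a, b) / (||a|| * ||b||). A minimal standalone sketch of that math (an illustration, not the toolkit's actual implementation) could look like:

#include <cmath>
#include <cstddef>
#include <vector>

// cosine similarity: dot(a, b) / (||a|| * ||b||), a value in [-1, 1]
static float cosine_similarity_sketch(const std::vector<float> &a, const std::vector<float> &b)
{
  float dot = 0.f, norm_a = 0.f, norm_b = 0.f;
  for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
  {
    dot += a[i] * b[i];
    norm_a += a[i] * a[i];
    norm_b += b[i] * b[i];
  }
  // a tiny epsilon guards against division by zero for degenerate embeddings
  return dot / (std::sqrt(norm_a) * std::sqrt(norm_b) + 1e-12f);
}

Values close to 1 suggest the same identity, while values near or below 0, like Sim02 above, suggest different people.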


Example 5: Face detection with UltraFace. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

⚠️ Expand the examples for all algorithm modules of Lite.AI.ToolKit
3.1 Object Detection Examples

3.1 Object detection with YoloV5. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
  
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  
  delete yolov5;
}

The output is:

Or you can use the newest 🔥🔥 YOLO series detectors YOLOX or YoloR, which produce similar results.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolox_s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolox_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolox_1.jpg";

  auto *yolox = new lite::cv::detection::YoloX(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolox->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolox;
}

The output is:

More available object detection algorithms include:

auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5(onnx_path); 
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path); 
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDet(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path); 
auto *detector = new lite::cv::detection::YOLOP(onnx_path); 
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
3.2 Face Recognition Examples

3.2 Face recognition with ArcFace. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267

More available face recognition algorithms include:

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !
3.3 Semantic Segmentation Examples

3.3 Semantic segmentation with DeepLabV3ResNet101. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}

The output is:

More available segmentation algorithms include:

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
3.4 Face Attribute Estimation Examples

3.4 Age estimation with SSRNet. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";

  auto *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);
  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;

  delete ssrnet;
}

The output is:

More available face attribute algorithms include:

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
3.5 Image Classification Examples

3.5 Image classification with DenseNet. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}

The output is:

More available classification algorithms include:

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
3.6 Face Detection Examples

3.6 Face detection with UltraFace. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

More available face detection algorithms include:

auto *detector = new lite::cv::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only !
auto *detector = new lite::cv::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
3.7 Colorization Examples

3.7 Image colorization with colorization. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:


3.8 Head Pose Estimation Examples

3.8 Head pose estimation with FSANet. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:

3.9 Facial Landmark Detection Examples

3.9 Detecting 1000 facial landmarks with FaceLandmarks1000. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

More available facial landmark detection algorithms include:

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks !
3.10 Style Transfer Examples

3.10 Neural style transfer with FastStyleTransfer. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:


3.11 Matting Examples

3.11 Video matting with RobustVideoMatting. Download the model files from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents);
  
  delete rvm;
}

The output is:


4. Lite.AI.ToolKit API Docs

4.1 Default version APIs

See api.default.md for more docs of the default-version APIs. For example, the API of YoloV5 is:

lite::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes, 
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
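
A short usage sketch for the signature above, with the optional arguments passed explicitly (the paths are placeholders):

#include "lite/lite.h"

static void test_custom_thresholds()
{
  auto *yolov5 = new lite::cv::detection::YoloV5("../../../hub/onnx/cv/yolov5s.onnx");
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg");
  // keep boxes scoring >= 0.30, suppress overlaps above IoU 0.45, return at most 50 boxes
  yolov5->detect(img_bgr, detected_boxes, 0.30f, 0.45f, 50);
  delete yolov5;
}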
ONNXRuntime, MNN, NCNN and TNN version APIs

4.2 ONNXRuntime version APIs

See api.onnxruntime.md for more docs of the ONNXRuntime-version APIs. For example, the API of YoloV5 is:

lite::onnxruntime::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes, 
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
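
Since ONNXRuntime is the default backend, lite::cv::detection::YoloV5 and lite::onnxruntime::cv::detection::YoloV5 are expected to behave identically; a small sketch under that assumption:

#include "lite/lite.h"

static void test_backend_namespaces()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  // the default namespace, which presumably forwards to the ONNXRuntime backend ...
  auto *yolov5_default = new lite::cv::detection::YoloV5(onnx_path);
  // ... and the explicit ONNXRuntime namespace with the identical detect() API
  auto *yolov5_ort = new lite::onnxruntime::cv::detection::YoloV5(onnx_path);
  delete yolov5_default;
  delete yolov5_ort;
}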

4.3 MNN version APIs

(todo⚠️: not implemented yet)

lite::mnn::cv::detection::YoloV5

lite::mnn::cv::detection::YoloV4

lite::mnn::cv::detection::YoloV3

lite::mnn::cv::detection::SSD

...

4.4 NCNN version APIs

(todo⚠️: not implemented yet)

lite::ncnn::cv::detection::YoloV5

lite::ncnn::cv::detection::YoloV4

lite::ncnn::cv::detection::YoloV3

lite::ncnn::cv::detection::SSD

...

4.5 TNN version APIs

(todo⚠️: not implemented yet)

lite::tnn::cv::detection::YoloV5

lite::tnn::cv::detection::YoloV4

lite::tnn::cv::detection::YoloV3

lite::tnn::cv::detection::SSD

...

5. Other Docs

Expand other docs

5.1 ONNXRuntime-related docs

5.2 third_party related docs

| Library | Target | Docs |
| --- | --- | --- |
| OpenCV | mac-x86_64 | opencv-mac-x86_64-build.zh.md |
| OpenCV | android-arm | opencv-static-android-arm-build.zh.md |
| onnxruntime | mac-x86_64 | onnxruntime-mac-x86_64-build.zh.md |
| onnxruntime | android-arm | onnxruntime-android-arm-build.zh.md |
| NCNN | mac-x86_64 | todo⚠️ |
| MNN | mac-x86_64 | todo⚠️ |
| TNN | mac-x86_64 | todo⚠️ |

6. License

The code of Lite.AI.ToolKit is released under the GPL-3.0 License.

7. References

This project references the following open-source projects.


8. Citation

If you use Lite.AI.ToolKit in your own projects, please consider citing it as follows.

@misc{lite.ai.toolkit2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={Yan Jun},
  year={2021}
}

9. Notice

If there is a model you are interested in and would like Lite.AI.ToolKit 🚀🚀🌟 to support, you can fork this repo, modify TODOLIST.md and submit a PR ~ I will review the PR and try to support the model in the future, but I cannot guarantee it. In addition, MNN, NCNN and TNN support for some models will be added in the future, but due to operator compatibility and other reasons, there is no guarantee that every model supported by ONNXRuntime C++ will run under MNN, NCNN and TNN. So, if you want to use all the models supported by this project and do not mind a 1~2ms performance gap, please use the ONNXRuntime implementations. ONNXRuntime is the default inference engine of this repository. If you really do want to build Lite.AI.ToolKit 🚀🚀🌟 shared libraries with MNN, NCNN and TNN support, you can follow the steps below (⚠️ currently unstable, only a few models are supported; enabling the MNN, NCNN or TNN options is not recommended! 🤦‍️)

  • build.sh中添加DENABLE_MNN=ONDENABLE_NCNN=ONDENABLE_TNN=ON,比如
cd build && cmake \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DINCLUDE_OPENCV=ON \   # whether to package OpenCV into lite.ai.toolkit, default ON; otherwise set up OpenCV yourself
  -DENABLE_MNN=ON \       # whether to build the MNN versions of the models, default OFF, only some models supported for now
  -DENABLE_NCNN=OFF \     # whether to build the NCNN versions of the models, default OFF, only some models supported for now
  -DENABLE_TNN=OFF \      # whether to build the TNN versions of the models, default OFF, only some models supported for now
  .. && make -j8
  • Use the MNN, NCNN or TNN version interfaces; see the demo for details, e.g.
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);

10. Related Projects