ort is a Rust interface for performing hardware-accelerated inference & training on machine learning models in the Open Neural Network Exchange (ONNX) format.
Based on the now-inactive onnxruntime-rs crate, ort is primarily a wrapper for Microsoft's ONNX Runtime library, but offers support for other pure-Rust runtimes.
With ONNX Runtime as its backend, ort is super quick, and it supports almost any hardware accelerator you can think of. Even so, it's light enough to run on your users' devices.
When you need to deploy a PyTorch/TensorFlow/Keras/scikit-learn/PaddlePaddle model either on-device or in the datacenter, ort has you covered.
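Loading an exported ONNX model and running inference takes only a few lines. The sketch below assumes the ort 2.x API (the exact method names differ between versions); `model.onnx` and the `"input"`/`"output"` tensor names are placeholders you'd replace with your own model's values:

```rust
use ort::session::Session;
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    // Build a session from an ONNX model on disk.
    // (Hardware accelerators can be enabled on the builder via execution providers.)
    let mut session = Session::builder()?
        .commit_from_file("model.onnx")?;

    // Construct a dummy f32 input tensor of shape [1, 3, 224, 224].
    let input = Tensor::from_array(([1usize, 3, 224, 224], vec![0.0f32; 3 * 224 * 224]))?;

    // Run inference; input/output names must match the model's graph.
    let outputs = session.run(ort::inputs!["input" => input])?;
    let predictions = outputs["output"].try_extract_tensor::<f32>()?;
    println!("{predictions:?}");

    Ok(())
}
```

Check the ort documentation for your pinned version before copying this verbatim; the session-builder and tensor-extraction APIs have changed across releases.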
Open a PR to add your project here 🌟
- Koharu uses `ort` to detect, OCR, and inpaint manga pages.
- BoquilaHUB uses `ort` for local AI deployment in biodiversity conservation efforts.
- Magika uses `ort` for content type detection.
- Text Embeddings Inference (TEI) uses `ort` to deliver high-performance ONNX Runtime inference for text embedding models.
- sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using `ort`.
- CamTrap Detector uses `ort` to detect animals, humans, and vehicles in trail camera imagery.
- oar-ocr is a comprehensive OCR library, built in Rust with `ort` for efficient inference.
- retto uses `ort` for reliable, fast ONNX inference of PaddleOCR models on desktop and WASM platforms.
- Ahnlich uses `ort` to power their AI proxy for semantic search applications.
- Valentinus uses `ort` to provide embedding model inference inside LMDB.
- edge-transformers uses `ort` for accelerated transformer model inference at the edge.
- FastEmbed-rs uses `ort` for generating vector embeddings and reranking locally.
- Ortex uses `ort` for safe ONNX Runtime bindings in Elixir.
