Replies: 3 comments
-
👋 Hello @sh1man, thank you for your interest in Ultralytics 🚀! We recommend checking out the Ultralytics documentation for helpful resources, including usage examples and guidance for CLI and Python API operations. These resources may provide insights relevant to your inquiry. If this is a 🐛 Bug Report, please include a minimum reproducible example so we can better assist you with debugging. If this is a custom training ❓ Question, include as much detail as possible, such as dataset examples, training logs, and other context, and ensure you are following our tips for achieving the best training results. Join our vibrant community through Discord, Discourse, or Reddit for additional support and collaboration opportunities. These spaces are ideal for engaging in-depth discussions or accessing real-time assistance.

Upgrade: You can ensure you're working with the latest version via `pip install -U ultralytics`.

Environments: Ultralytics models, including YOLO, can be run in various up-to-date verified environments. Options include free GPU-based notebooks, cloud computing solutions, or Docker images. These environments come preconfigured with dependencies like Python, CUDA, cuDNN, and PyTorch, making it easier for you to deploy models in production.

Status: If you need information regarding CI testing, the status badges indicate whether current tests for all YOLO modes and tasks are passing. Ultralytics CI tests cover macOS, Windows, and Ubuntu daily and on every code update.

📢 Please note that this is an automated response to help guide you toward relevant resources. An Ultralytics engineer will review your discussion thread and provide additional support soon.
-
@sh1man yes, using multiprocessing with each process loading its own YOLO model instance is recommended for production, as it avoids shared-state issues and can increase throughput by leveraging independent processes. You can refer to our guide on thread-safe inference for more details: https://docs.ultralytics.com/guides/yolo-thread-safe-inference/
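The per-process pattern above can be sketched with a pool initializer that loads one model per worker. This is a minimal, hedged illustration: the `_load_model` stand-in is hypothetical so the sketch runs anywhere; in a real setup you would replace its body with something like `YOLO("yolo11n.pt")` from `ultralytics` (the model filename is an assumption):

```python
import multiprocessing as mp

_model = None  # each worker process holds its own private instance


def _load_model():
    # Hypothetical stand-in so this sketch runs without ultralytics installed.
    # In production, replace with:
    #     from ultralytics import YOLO
    #     return YOLO("yolo11n.pt")  # model name is an assumption
    return lambda x: x * 2


def _init_worker():
    # Pool initializer: runs exactly once per worker process,
    # so the model is loaded once per process, never shared.
    global _model
    _model = _load_model()


def _predict(item):
    # Executes inside a worker, using that worker's own model.
    return _model(item)


def run_parallel(items, workers=2):
    # Distribute items across workers; no model state crosses processes.
    with mp.Pool(processes=workers, initializer=_init_worker) as pool:
        return pool.map(_predict, items)
```

With the stand-in model, `run_parallel([1, 2, 3])` returns `[2, 4, 6]`; the key point is that loading happens in the initializer, once per worker, not once per item.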
-
You should serve the model with something like Triton Inference Server, so that you don't have to load it in every process. https://docs.ultralytics.com/guides/triton-inference-server/
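The linked guide describes pointing the Ultralytics `YOLO` class at a Triton endpoint so inference runs on the server rather than in each client process. A minimal sketch under stated assumptions (the server URL, deployed model name `yolo`, and image path are all placeholders, and a running Triton server is required):

```python
from ultralytics import YOLO

# Connect to a model already deployed on Triton Inference Server.
# URL, model name ("yolo"), and image path are assumptions for illustration.
model = YOLO("http://localhost:8000/yolo", task="detect")

# Inference executes on the Triton server, not in this process,
# so worker processes stay lightweight and share one served model.
results = model("path/to/image.jpg")
```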
-
I am using multiple processes in my program. Does it make sense to load a separate model instance into each process?
Will my throughput increase?