Welcome to the Ultralytics YOLO Flutter plugin! Integrate cutting-edge Ultralytics YOLO computer vision models seamlessly into your Flutter mobile applications. This plugin supports both Android and iOS platforms, offering APIs for object detection and image classification.
| Feature         | Android | iOS |
| --------------- | ------- | --- |
| Detection       | ✅      | ✅  |
| Classification  | ✅      | ✅  |
| Pose Estimation | ❌      | ❌  |
| Segmentation    | ❌      | ❌  |
| OBB Detection   | ❌      | ❌  |
Before proceeding or reporting issues, please ensure you have read this documentation thoroughly.
This Ultralytics YOLO plugin is specifically designed for mobile platforms, targeting iOS and Android apps. It leverages Flutter Platform Channels for efficient communication between the client (your app/plugin) and the host platform (Android/iOS), ensuring seamless integration and responsiveness. All intensive processing related to Ultralytics YOLO APIs is handled natively using platform-specific APIs, with this plugin acting as a bridge.
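For context, the sketch below shows roughly how a platform channel call works in Flutter; the channel and method names are purely illustrative and are not the plugin's actual internals:

```dart
import 'package:flutter/services.dart';

// Illustrative only: this is not the plugin's real channel or method name.
const MethodChannel _channel = MethodChannel('example/yolo');

Future<Object?> runNativeInference(String imagePath) {
  // Dart hands the request to native code, where the heavy inference runs,
  // and receives the results back over the same channel.
  return _channel.invokeMethod('predict', {'imagePath': imagePath});
}
```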
Before integrating Ultralytics YOLO into your app, you must export the necessary models. The export process generates `.tflite` (for Android) and `.mlmodel` (for iOS) files, which you'll include in your app. Use the Ultralytics YOLO Command Line Interface (CLI) for exporting.
IMPORTANT: The parameters specified in the commands below are mandatory. This Flutter plugin currently only supports models exported using these exact commands. Using different parameters may cause the plugin to malfunction. We are actively working on expanding support for more models and parameters.
Use the following commands to export the required models:
### Android

Export the YOLOv8n detection model:

```bash
yolo export format=tflite model=yolov8n imgsz=320 int8
```

Export the YOLOv8n-cls classification model:

```bash
yolo export format=tflite model=yolov8n-cls imgsz=320 int8
```

After running the commands, use the generated `yolov8n_int8.tflite` or `yolov8n-cls_int8.tflite` file in your Android project.
### iOS

Export the YOLOv8n detection model for iOS:

```bash
yolo export format=mlmodel model=yolov8n imgsz=[320, 192] half nms
```

Use the resulting `.mlmodel` file in your iOS project.
After exporting the models, include the generated `.tflite` and `.mlmodel` files in your Flutter app's `assets` folder. Refer to the Flutter documentation on adding assets for guidance.
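For example, a `pubspec.yaml` entry registering the model files could look like this (the file names below assume the export commands above; adjust them to your actual output):

```yaml
flutter:
  assets:
    # Bundle the exported models and metadata with the app
    - assets/models/yolov8n_int8.tflite
    - assets/models/yolov8n.mlmodel
    - assets/models/metadata.yaml
```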
Ensure your application requests the necessary permissions to access the camera and storage.
### Android

Add the following permissions to your `AndroidManifest.xml` file, typically located at `android/app/src/main/AndroidManifest.xml`. Consult the Android developer documentation for more details on permissions.

```xml
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
```
### iOS

Add the following keys with descriptions to your `Info.plist` file, usually found at `ios/Runner/Info.plist`. See Apple's documentation on protecting user privacy for more information.

```xml
<key>NSCameraUsageDescription</key>
<string>Camera permission is required for object detection.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Storage permission is required for object detection.</string>
```

Additionally, modify your `Podfile` (located at `ios/Podfile`) to include permission configurations for `permission_handler`:
```ruby
post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)

    # Start of the permission_handler configuration
    target.build_configurations.each do |config|
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        '$(inherited)',

        ## dart: PermissionGroup.camera
        'PERMISSION_CAMERA=1',

        ## dart: PermissionGroup.photos
        'PERMISSION_PHOTOS=1',
      ]
    end
    # End of the permission_handler configuration
  end
end
```
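With the build-time configuration in place, you can request the permissions at runtime before opening the camera. A minimal sketch using the `permission_handler` package configured above:

```dart
import 'package:permission_handler/permission_handler.dart';

/// Requests camera and photo-library access, returning true only if both are granted.
Future<bool> requestVisionPermissions() async {
  final statuses = await [
    Permission.camera,
    Permission.photos,
  ].request();
  return statuses.values.every((status) => status.isGranted);
}
```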
Instantiate a predictor object using the `LocalYoloModel` class. Provide the necessary parameters:
```dart
// Define the model configuration
final model = LocalYoloModel(
  id: 'yolov8n-detect', // Unique identifier for the model
  task: Task.detect, // Specify the task (detect or classify)
  format: Format.tflite, // Specify the model format (tflite or coreml)
  modelPath: 'assets/models/yolov8n_int8.tflite', // Path to the model file in assets
  metadataPath: 'assets/models/metadata.yaml', // Path to the metadata file (if applicable)
);
```
Create and load an `ObjectDetector`:
```dart
// Initialize the ObjectDetector
final objectDetector = ObjectDetector(model: model);

// Load the model
await objectDetector.loadModel();
```
Create and load an `ImageClassifier`:
```dart
// Initialize the ImageClassifier (adjust model details accordingly)
final imageClassifier = ImageClassifier(model: model); // Ensure 'model' is configured for classification

// Load the model
await imageClassifier.loadModel();
```
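Since model loading happens on the native side, it can fail at runtime (for example, a wrong asset path or a model exported with unsupported parameters). A defensive sketch, assuming such failures surface as a `PlatformException` from the platform channel:

```dart
import 'package:flutter/foundation.dart' show debugPrint;
import 'package:flutter/services.dart' show PlatformException;

Future<void> loadModelSafely(ObjectDetector detector) async {
  try {
    await detector.loadModel();
  } on PlatformException catch (e) {
    // Assumption: native load errors are reported as PlatformExceptions.
    debugPrint('Failed to load YOLO model: ${e.message}');
  }
}
```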
Use the `UltralyticsYoloCameraPreview` widget to display the live camera feed and overlay prediction results.
```dart
// Create a camera controller
final _controller = UltralyticsYoloCameraController();

// Add the preview widget to your UI
UltralyticsYoloCameraPreview(
  predictor: objectDetector, // Pass your initialized predictor (ObjectDetector or ImageClassifier)
  controller: _controller, // Pass the camera controller
  // Optional: Display a loading indicator while the model loads
  loadingPlaceholder: Center(
    child: Wrap(
      direction: Axis.vertical,
      crossAxisAlignment: WrapCrossAlignment.center,
      children: [
        const CircularProgressIndicator(
          color: Colors.white,
          strokeWidth: 2,
        ),
        const SizedBox(height: 20),
        Text(
          'Loading model...',
          // style: theme.typography.base.copyWith( // Adapt styling as needed
          //   color: Colors.white,
          //   fontSize: 14,
          // ),
        ),
      ],
    ),
  ),
  // Add other necessary parameters like onCameraCreated, onCameraInitialized, etc.
)
```
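In a real app the preview usually lives inside a `StatefulWidget`, so the controller and predictor persist across rebuilds. A minimal sketch assuming the model setup from the previous steps (the screen name and import path are assumptions; adjust to your project):

```dart
import 'package:flutter/material.dart';
import 'package:ultralytics_yolo/ultralytics_yolo.dart'; // Assumed import path for the plugin

class DetectionScreen extends StatefulWidget {
  const DetectionScreen({super.key});

  @override
  State<DetectionScreen> createState() => _DetectionScreenState();
}

class _DetectionScreenState extends State<DetectionScreen> {
  final _controller = UltralyticsYoloCameraController();
  late final ObjectDetector _detector;

  @override
  void initState() {
    super.initState();
    // Reuse the LocalYoloModel configuration shown earlier
    _detector = ObjectDetector(
      model: LocalYoloModel(
        id: 'yolov8n-detect',
        task: Task.detect,
        format: Format.tflite,
        modelPath: 'assets/models/yolov8n_int8.tflite',
        metadataPath: 'assets/models/metadata.yaml',
      ),
    );
    _detector.loadModel();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: UltralyticsYoloCameraPreview(
        predictor: _detector,
        controller: _controller,
      ),
    );
  }
}
```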
Perform predictions on static images using the `detect` or `classify` methods.
```dart
// Perform object detection on an image file
final detectionResults = await objectDetector.detect(imagePath: 'path/to/your/image.jpg');
```
or
```dart
// Perform image classification on an image file
final classificationResults = await imageClassifier.classify(imagePath: 'path/to/your/image.jpg');
```
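In practice, the image path often comes from the user's gallery or camera roll. A short sketch using the `image_picker` package (an assumption here; it is not part of this plugin) to obtain a path and run detection:

```dart
import 'package:image_picker/image_picker.dart';

Future<void> detectFromGallery(ObjectDetector detector) async {
  // Let the user pick an image from the device gallery
  final pickedFile = await ImagePicker().pickImage(source: ImageSource.gallery);
  if (pickedFile == null) return; // User cancelled the picker

  // Run detection on the selected image file
  final results = await detector.detect(imagePath: pickedFile.path);
  print(results);
}
```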
Ultralytics thrives on community collaboration, and we deeply value your contributions! Whether it's bug fixes, feature enhancements, or documentation improvements, your involvement is crucial. Please review our Contributing Guide for detailed insights on how to participate. We also encourage you to share your feedback through our Survey. A heartfelt thank you 🙏 goes out to all our contributors!
Ultralytics offers two licensing options to accommodate diverse needs:
- AGPL-3.0 License: Ideal for students, researchers, and enthusiasts passionate about open-source collaboration. This OSI-approved license promotes knowledge sharing and open contribution. See the LICENSE file for details.
- Enterprise License: Designed for commercial applications, this license permits seamless integration of Ultralytics software and AI models into commercial products and services, bypassing the open-source requirements of AGPL-3.0. For commercial use cases, please inquire about an Enterprise License.
Encountering issues or have feature requests related to Ultralytics YOLO? Please report them via GitHub Issues. For broader discussions, questions, and community support, join our Discord server!