This project implements an autonomous drone landing system that uses visual and distance-based sensors. The primary objective is to enable the drone to detect a landing pad with a downward-facing RGB camera and to measure its height above the ground with a VL53 time-of-flight distance sensor. The system combines image processing with closed-loop control to ensure a safe and accurate landing.
- Autonomous Landing Pad Detection: Utilizes an RGB camera and image processing techniques.
- Distance Measurement: Employs a VL53 sensor for real-time altitude assessment.
- Landing Control: Integrates Proportional-Integral-Derivative (PID) control for dynamic throttle adjustments and precise positioning (see the sketch after this list).
- Robust Detection: Combines YOLO (for object detection), OpenCV (for real-time image processing), and OCR (Tesseract) for identifying the landing pad and its "H" symbol.
- Simulation Support: Includes Unity-based simulations for testing and training, compatible with macOS, Linux, and Windows, and leverages `mlagents_envs` for seamless integration of Unity simulations into Python workflows.
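As a rough illustration of the landing-control idea, here is a minimal, self-contained PID sketch. The class, gains, and altitude values are hypothetical textbook choices, not the project's tuned controller; the actual control logic lives in the `src/` scripts.

```python
# Illustrative PID sketch only -- gains, dt, and altitude values are
# hypothetical and do not reflect the project's tuned controller.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        # Accumulate the integral term and approximate the derivative.
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive the measured altitude (e.g. from the VL53 sensor) toward a setpoint.
altitude_pid = PID(kp=0.8, ki=0.05, kd=0.2)   # hypothetical gains
target_m, measured_m = 0.5, 1.2               # example altitudes in meters
throttle_adjust = altitude_pid.update(target_m - measured_m, dt=0.02)
print(f"throttle adjustment: {throttle_adjust:+.3f}")
```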
Here is an example of the landing process captured during testing:
Follow these steps to set up the project:
1. Clone the Repository:

   ```bash
   git clone https://github.com/Oneiben/autonomous-drone-landing-system.git
   cd autonomous-drone-landing-system
   ```
2. Install Dependencies: Make sure you have Python 3.10.12 installed, then install the required packages:

   ```bash
   pip install -r requirements.txt
   ```
The repository is organized as follows:

```
autonomous-drone-landing-system/
├── Media/
│   ├── landing_progress_gifs/   # GIFs showing the landing process from different angles
│   │   ├── Downward.gif
│   │   └── Main.gif
│   └── landing_pad_images/      # Images of the landing pad
│       ├── landing_pad.png
│       └── LandingPad.jpg
├── models/                      # YOLO model weights
├── src/                         # Python scripts for the landing system and utilities
│   ├── control_actions.py
│   ├── image_processing.py
│   ├── main.py
│   └── simulation.py
├── tests/                       # Unit tests for validating components
│   ├── ip_test_cv2.py
│   ├── ip_test_pytesseract.py
│   └── ip_test_yolo.py
├── LICENSE
├── README.md
└── requirements.txt             # Python dependencies
```
3. Set up the Simulation Environment: To test in simulation, check out 🔗 Quadrotor Simulation and build the Unity simulation. Once you build it, note the path to the build file; you will need this path when running the main script.
4. Launch the main script to start the landing system:

   ```bash
   python src/main.py <path-to-your-simulation-build>
   ```
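   For context, `mlagents_envs` typically opens a Unity build from a path like this. The sketch below shows the generic handshake; the behavior discovery and zero-action step are illustrative, not the project's actual `simulation.py` logic.

   ```python
   # Generic mlagents_envs handshake -- a sketch, not the project's simulation.py.
   # Behavior name and action layout depend on the Unity scene and are
   # discovered at runtime.
   import sys
   import numpy as np
   from mlagents_envs.environment import UnityEnvironment
   from mlagents_envs.base_env import ActionTuple

   env = UnityEnvironment(file_name=sys.argv[1])  # path to the Unity build
   env.reset()

   behavior_name = list(env.behavior_specs)[0]    # first registered agent behavior
   spec = env.behavior_specs[behavior_name]

   decision_steps, terminal_steps = env.get_steps(behavior_name)
   n_agents = len(decision_steps)

   # Send a zero (hover) continuous action to every agent, then advance one tick.
   action = ActionTuple(
       continuous=np.zeros((n_agents, spec.action_spec.continuous_size), dtype=np.float32)
   )
   env.set_actions(behavior_name, action)
   env.step()

   env.close()
   ```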
5. Test Detection Methods: Use the test scripts in the 📂 tests folder to validate the detection methods with a webcam. The YOLO models in 📂 models are used to detect two landing pads; make sure the model names align with the images in the 📂 landing_pad_images directory. Example for testing YOLO:

   ```bash
   python tests/ip_test_yolo.py
   ```
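   For reference, a webcam detection loop along these lines might look roughly as follows. This is a sketch that assumes the Ultralytics YOLO API and a hypothetical weights filename; use whichever weights file actually sits in 📂 models.

   ```python
   # Sketch of a webcam YOLO loop -- assumes the Ultralytics API;
   # the weights filename below is hypothetical.
   import cv2
   from ultralytics import YOLO

   model = YOLO("models/landing_pad.pt")  # hypothetical name; pick a file from models/
   cap = cv2.VideoCapture(0)              # default webcam

   while cap.isOpened():
       ok, frame = cap.read()
       if not ok:
           break
       results = model(frame, verbose=False)
       annotated = results[0].plot()       # draw detected boxes on the frame
       cv2.imshow("landing pad detection", annotated)
       if cv2.waitKey(1) & 0xFF == ord("q"):
           break

   cap.release()
   cv2.destroyAllWindows()
   ```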
- Python: Core language for development.
- YOLO: Object detection for landing pad recognition.
- OpenCV: Image processing library.
- Tesseract OCR: Letter detection for identifying the "H" symbol (a sketch follows this list).
- VL53 Distance Sensor: Measures altitude.
- PID Control: Ensures precise and smooth adjustments for safe landing.
- Unity: Used for creating and running drone landing simulations.
- mlagents_envs: Provides a `UnityEnvironmentWrapper` for interfacing Python with Unity simulations.
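To illustrate the OCR step, here is a hedged sketch of confirming the "H" marking with pytesseract. The preprocessing and page-segmentation choices are assumptions for illustration, not the project's exact pipeline; see `tests/ip_test_pytesseract.py` for the real one.

```python
# Sketch of "H" confirmation with Tesseract -- the preprocessing and
# --psm 10 (single character mode) are illustrative choices only.
import cv2
import pytesseract

img = cv2.imread("Media/landing_pad_images/landing_pad.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu thresholding gives Tesseract a clean black-and-white glyph.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(
    binary, config="--psm 10 -c tessedit_char_whitelist=H"
)
print("H detected" if "H" in text else "no H found")
```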
Contributions are welcome! If you have suggestions or improvements, feel free to fork the repository and create a pull request.
- Fork the repository.
- Create a new branch: `git checkout -b feature-name`
- Commit your changes: `git commit -m "Description of changes"`
- Push the changes and open a pull request.
This project is licensed under the MIT License. See the 📜 LICENSE file for more details.