Autonomous Drone Landing System

Main Process Overview (animation: Media/landing_progress_gifs/Main.gif)


Table of Contents

  • Overview
  • Landing Process in Action
  • Installation
  • Project Structure
  • Usage
  • Technologies Used
  • Contributing
  • License

Overview

This project implements an autonomous drone landing system that uses visual and distance-based sensors. The drone detects a landing pad with a downward-facing RGB camera and measures its distance from the ground with a VL53 distance sensor. Image processing and PID control together ensure a safe and accurate landing.

Key Features:

  • Autonomous Landing Pad Detection: Utilizes an RGB camera and image processing techniques.
  • Distance Measurement: Employs a VL53 sensor for real-time altitude assessment.
  • Landing Control: Integrates Proportional-Integral-Derivative (PID) control for dynamic throttle adjustments and precise positioning.
  • Robust Detection: Combines YOLO (object detection), OpenCV (real-time image processing), and Tesseract OCR to identify the landing pad and its "H" symbol; a minimal sketch of this pipeline follows the list.
  • Simulation Support: Includes Unity-based simulations for testing and training, compatible with macOS, Linux, and Windows, and leverages mlagents_envs for seamless integration of Unity simulations into Python workflows.
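
Below is a minimal, hypothetical sketch of that detection pipeline, assuming the ultralytics YOLO API. The weights filename, confidence threshold, and Tesseract configuration are illustrative assumptions, not the repository's exact code:

    # Sketch: YOLO proposes a landing-pad box, OpenCV prepares the crop,
    # and Tesseract confirms the "H" marking. All thresholds are illustrative.
    import cv2
    import pytesseract
    from ultralytics import YOLO

    model = YOLO("models/landing_pad.pt")  # hypothetical weights filename

    def find_landing_pad(frame):
        """Return the (x, y) center of a confirmed landing pad, or None."""
        results = model(frame, verbose=False)
        for box in results[0].boxes:
            if float(box.conf) < 0.5:  # assumed confidence threshold
                continue
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            roi = frame[y1:y2, x1:x2]
            # Binarize the crop so Tesseract sees a clean, high-contrast glyph.
            gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
            _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            text = pytesseract.image_to_string(
                binary, config="--psm 10 -c tessedit_char_whitelist=H"
            )
            if "H" in text:
                return ((x1 + x2) // 2, (y1 + y2) // 2)
        return None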

Landing Process in Action

Here is an example of the landing process captured during testing:

Downward Camera View (animation: Media/landing_progress_gifs/Downward.gif)


Installation

Follow these steps to set up the project:

  1. Clone the Repository:

    git clone https://github.com/Oneiben/autonomous-drone-landing-system.git
    cd autonomous-drone-landing-system
  2. Install Dependencies: Make sure you have Python 3.10.12 installed, then install the required packages using:

    pip install -r requirements.txt

Project Structure

autonomous-drone-landing-system/
├── Media/  
│   ├── landing_progress_gifs/  # GIFs showing the landing process from different angles
│   │   ├── Downward.gif
│   │   └── Main.gif
│   └── landing_pad_images/     # Images of the landing pad
│       ├── landing_pad.png
│       └── LandingPad.jpg
├── models/                     # YOLO model weights
├── src/                        # Python scripts for the landing system and utilities
│   ├── control_actions.py
│   ├── image_processing.py
│   ├── main.py
│   └── simulation.py
├── tests/                      # Unit tests for validating components
│   ├── ip_test_cv2.py
│   ├── ip_test_pytesseract.py
│   └── ip_test_yolo.py
├── LICENSE 
├── README.md
└── requirements.txt            # Python Dependencies

Usage

  1. Set up your simulation environment: To test in simulation, check out the 🔗 Quadrotor Simulation project and build a Unity simulation from it.

    Once the Unity simulation is built, note the path to the build file; you'll need it when running the main script.

  2. Launch the main script to start the landing system:

    python src/main.py <path-to-your-simulation-build>
  3. Test Detection Methods: Use the test scripts in the 📂 tests folder to validate the detection methods with a webcam. The YOLO models in 📂 models detect the two landing pads; ensure the model names align with the images provided in the 📂 landing_pad_images directory.

    • Example for testing YOLO:
      python tests/ip_test_yolo.py
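
For orientation, connecting Python to a Unity build through mlagents_envs looks roughly like the sketch below. The behavior lookup and the four-channel continuous action layout are assumptions for illustration, not the project's exact control loop:

    # Sketch: drive a Unity build via mlagents_envs. The action layout
    # (e.g., throttle/roll/pitch/yaw) is an assumption for illustration.
    import sys
    import numpy as np
    from mlagents_envs.environment import UnityEnvironment
    from mlagents_envs.base_env import ActionTuple

    env = UnityEnvironment(file_name=sys.argv[1])  # path to your Unity build
    env.reset()
    behavior_name = list(env.behavior_specs)[0]

    for _ in range(100):  # a short, fixed-length run for demonstration
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        n_agents = len(decision_steps)
        # Send a neutral command to every agent (4 continuous actions assumed).
        action = ActionTuple(continuous=np.zeros((n_agents, 4), dtype=np.float32))
        env.set_actions(behavior_name, action)
        env.step()

    env.close()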

Technologies Used

  • Python: Core language for development.
  • YOLO: Object detection for landing pad recognition.
  • OpenCV: Image processing library.
  • Tesseract OCR: Letter detection for identifying the "H" symbol.
  • VL53 Distance Sensor: Measures altitude.
  • PID Control: Ensures precise, smooth throttle adjustments for a safe landing (a minimal sketch appears after this list).
  • Unity: Used for creating and running drone landing simulations.
  • mlagents_envs: Provides the Python API (UnityEnvironment) for interfacing with Unity simulations.
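
The PID throttle adjustment mentioned above fits in a few lines. The sketch below uses illustrative gains, limits, and altitude values, not the repository's tuned parameters:

    # Sketch: PID control of throttle toward a target altitude.
    # Gains and output limits are illustrative assumptions, not tuned values.
    class PID:
        def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_min, self.out_max = out_min, out_max
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            out = self.kp * error + self.ki * self.integral + self.kd * derivative
            return max(self.out_min, min(self.out_max, out))  # clamp the command

    altitude_pid = PID(kp=0.8, ki=0.05, kd=0.3)    # hypothetical gains
    target_altitude, measured_altitude = 0.0, 1.5  # meters (e.g., from the VL53)
    throttle = altitude_pid.update(target_altitude - measured_altitude, dt=0.02)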

Contributing

Contributions are welcome! If you have suggestions or improvements, feel free to fork the repository and create a pull request.

Steps to Contribute:

  1. Fork the repository.
  2. Create a new branch:
    git checkout -b feature-name
  3. Commit your changes:
    git commit -m "Description of changes"
  4. Push the changes and open a pull request.

License

This project is licensed under the MIT License. See the 📜 LICENSE file for more details.
