Robot Localization in a Warehouse Using Computer Vision Algorithms

Project Description

This project involved developing a robot localization module for warehouse environments based on computer vision algorithms and images from two cameras. The goal was to detect the robot using object detection methods (YOLO v8 and a Haar Cascade Classifier) and to transform the camera perspective so that the robot can be localized within the warehouse space.

Project Goal

This project was carried out as part of the IDS Industrial Data Science student research group at AGH University of Science and Technology. The main objective was to create a system that enables robot localization based on images from two cameras. It was implemented as a component of a digital twin of a warehouse.

Technologies Used

  • YOLO v8 – real-time object detection algorithm offering high accuracy and speed.
  • Haar Cascade Classifier – a classical, machine-learning-based object detection method.
  • Cameras – two cameras collecting real-time image data from the warehouse.
  • Jetson Nano – onboard computer of the mobile robot.
  • OpenCV – library for image and video processing.

Fisheye Effect

The fisheye effect, common in wide-angle industrial cameras, can be corrected using libraries like OpenCV by calibrating the camera with images of a known pattern, such as a chessboard. This process involves calculating distortion coefficients and the camera matrix, which are then used to undistort the image. Correcting fisheye distortion significantly improves object localization accuracy in vision-based systems like robotics or industrial monitoring.
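
The sketch below illustrates this calibration-and-undistortion workflow with OpenCV's standard distortion model (OpenCV also ships a dedicated cv2.fisheye module that works analogously). The chessboard size and the file paths are illustrative assumptions, not the project's actual configuration.

```python
import glob
import cv2
import numpy as np

# Chessboard geometry: number of inner corners per row/column (assumed value).
PATTERN = (9, 6)

# 3D reference points of the chessboard corners (all on the z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):   # hypothetical folder of chessboard photos
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the camera matrix and distortion coefficients from the detected corners.
ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)

# Undistort a frame before running detection on it.
frame = cv2.imread("frame.jpg")               # hypothetical camera frame
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
```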

Achieved Goals

  1. Robot Detection – detecting the robot in camera images using YOLO v8 and Haar Cascade Classifier.
  2. Robot Localization – determining the robot's position in the warehouse space based on detections.
  3. Perspective Transformation – transforming the camera perspective to map detections onto the warehouse floor plan and obtain the robot's position in warehouse coordinates (a minimal sketch of this step follows the list).
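
A minimal sketch of such a perspective (homography) transformation with OpenCV is shown below. The four reference points, the floor dimensions and the bounding-box values are illustrative assumptions only.

```python
import cv2
import numpy as np

# Four reference points: pixel coordinates of floor markers in the (undistorted)
# camera image and their known positions in warehouse floor coordinates (metres).
# All values below are assumptions for illustration.
image_pts = np.float32([[412, 108], [1505, 96], [1688, 934], [227, 951]])
floor_pts = np.float32([[0.0, 0.0], [12.0, 0.0], [12.0, 8.0], [0.0, 8.0]])

# Homography mapping image pixels to floor-plane coordinates.
H = cv2.getPerspectiveTransform(image_pts, floor_pts)

def robot_position(bbox):
    """Map the bottom-centre of a detection box (x1, y1, x2, y2) to floor coordinates."""
    x1, y1, x2, y2 = bbox
    foot = np.float32([[[(x1 + x2) / 2.0, y2]]])   # point where the robot touches the floor
    fx, fy = cv2.perspectiveTransform(foot, H)[0, 0]
    return float(fx), float(fy)

print(robot_position((830, 540, 910, 655)))        # approximate (x, y) position in metres
```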

Detection Algorithms

YOLO v8

  • Detection Accuracy: ~95% on the training dataset.
  • Average Detection Time: 0.05 seconds.
  • Advantages: Very fast, suitable for real-time processing, and highly accurate.
  • Use Case: Used for detecting the robot and other warehouse objects (a minimal inference sketch follows this list).
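
The snippet below is a minimal inference sketch using the ultralytics YOLO v8 API on a live camera stream. The weights path and the camera index are assumptions, not the project's actual training output or setup.

```python
import cv2
from ultralytics import YOLO

# Placeholder path for the model trained on the warehouse dataset.
model = YOLO("runs/detect/train/weights/best.pt")

cap = cv2.VideoCapture(0)                      # first warehouse camera (assumed index)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]   # single-frame inference
    for box in results.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        label = results.names[int(box.cls)]
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(frame, label, (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("YOLO v8 detections", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```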

Haar Cascade Classifier

  • Detection Accuracy: ~97% on the training dataset.
  • Average Detection Time: 1.5 seconds.
  • Advantages: Lightweight, with much lower computational and memory requirements than YOLO v8.
  • Use Case: Used as an alternative to YOLO v8 when hardware resources are limited and real-time detection is not required (a minimal usage sketch follows this list).
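
Below is a minimal detection sketch with OpenCV's CascadeClassifier. The cascade XML path is a placeholder for the cascade trained on robot images, and the detectMultiScale parameters are typical starting values that would need tuning.

```python
import cv2

# Placeholder path for the cascade trained on robot images.
cascade = cv2.CascadeClassifier("cascade/robot_cascade.xml")

frame = cv2.imread("frame.jpg")                  # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Multi-scale sliding-window detection over the grayscale frame.
detections = cascade.detectMultiScale(
    gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40)
)

# Draw every detected robot bounding box.
for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

print(f"Detections: {len(detections)}")
```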

Results

  • Localization Accuracy: High localization accuracy was achieved with both YOLO v8 and the Haar Cascade Classifier.
  • Scalability: The system can be easily scaled to different warehouse sizes and camera setups.

Conclusions

  • YOLO v8 offers excellent detection accuracy but requires more computing power.
  • The Haar Cascade Classifier has much lower hardware requirements, but its per-frame detection time was considerably longer in this setup.
  • Both algorithms enabled the robot to be localized from live camera images, and the system is scalable.

Project Summary

  • Developed a camera-based robot detection system.
  • Localized the robot in warehouse space using computer vision.
  • Compared performance between YOLO v8 and Haar Cascade Classifier.

License

This project is licensed under the MIT License.

Contact

  • Supervisor: Dr. Eng. Waldemar Bauer
  • Team: Jakub Mieszczak, Konrad Golemo, Bartłomiej Gawęda
