This is the official repository of our PPE detection application for construction safety, which is part of the larger system built for our capstone project. It analyzes the PPE detected in the camera stream and evaluates the violations of each person present using the YOLOR object detection algorithm. The detection output is consolidated into a payload and delivered to clients over MQTT, a lightweight messaging protocol. These clients are safety officers, since they hold the authority and responsibility for the safety of the people within the area.
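Below is a minimal sketch of how a consolidated detection result might be published over MQTT using paho-mqtt; the broker address, topic name, and payload fields are illustrative assumptions rather than the application's actual schema.

```python
# Minimal sketch (assumptions: broker address, topic name, and payload fields
# are illustrative; the actual schema is defined by the application).
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()                     # paho-mqtt 1.x constructor
client.connect("localhost", 1883)          # broker address is an assumption
client.loop_start()

# Example consolidated detection result for one frame
payload = {
    "timestamp": int(time.time()),
    "violations": [
        {"person_id": 1, "missing": ["No Helmet", "No Gloves"]},
        {"person_id": 2, "missing": []},
    ],
}

# Publish to a topic the safety officers' mobile app subscribes to
client.publish("ppe/violations", json.dumps(payload), qos=1)
client.loop_stop()
client.disconnect()
```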
| Member | Specialization |
| ------ | -------------- |
| Zeus James Baltazar | Intelligent Systems |
| Martin Lorenzo Basbacio | Data Science |
| Clarence Gail Larrosa | Intelligent Systems |
| Ian Gabriel Marquez | System Administration |
Our team is a dynamic group with a wide range of skills, expertise, and backgrounds that collectively drive our project forward. Each member's dedication to their designated role results in great productivity and a remarkable team.
Zeus James Baltazar (zEuS0390)
He is the lead developer and focuses mostly on turning the ideas contributed by the team into working software.
Martin Lorenzo Basbacio (mahteenbash)
He handles the data science side of the project, such as the object detection methods.
Clarence Gail Larrosa (clarencelarrosa)
She mostly manages and maintains the documentation of the project.
Ian Gabriel Marquez (ianmarquez1129)
He handles the development of the mobile application and its UI/UX design.
- Mainly cloud-based, but can be switched to a local setup through configuration
- Detects 5 basic PPE items for construction safety, plus 5 corresponding classes for workers not wearing them
- Detects persons in order to attribute violations to individual workers
- Delivers real-time reports to the mobile application via MQTT
- Retains data in storage and a database for future processing (a sketch follows this list)
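As a rough illustration of the data-retention feature, here is a minimal sketch using SQLAlchemy with SQLite; the table and column names are assumptions, not the project's actual schema.

```python
# Minimal sketch of retaining detection results in SQLite via SQLAlchemy.
# Table and column names are illustrative assumptions, not the project's schema.
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Violation(Base):
    __tablename__ = "violations"
    id = Column(Integer, primary_key=True)
    person_id = Column(Integer)
    missing_ppe = Column(String)          # e.g. "No Helmet,No Gloves"
    detected_at = Column(DateTime, default=datetime.utcnow)

engine = create_engine("sqlite:///detections.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(Violation(person_id=1, missing_ppe="No Helmet"))
    session.commit()
```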
The flowchart illustrates how the input flows through the system, from capturing the video stream on the Raspberry Pi to delivering the detection outputs to the users.
The diagram outlines the interactions between the various entities behind the mobile application. The hardware device captures the real-time video stream and sends it to a service on the Amazon Web Services (AWS) cloud platform.
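The capture-to-detection flow can be pictured with a minimal OpenCV sketch like the one below; the camera index and the `detect_ppe()` wrapper are hypothetical stand-ins for the actual YOLOR inference code.

```python
# Minimal sketch of the capture-to-detection flow (assumptions: camera index 0,
# and detect_ppe() is a hypothetical wrapper around the YOLOR model).
import cv2

def detect_ppe(frame):
    """Hypothetical placeholder for running the YOLOR model on one frame."""
    return []  # list of (class_name, confidence, bounding_box)

cap = cv2.VideoCapture(0)                 # Raspberry Pi camera / video stream
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # 1080p capture
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = detect_ppe(frame)
        # ...consolidate detections into a payload and publish via MQTT...
finally:
    cap.release()
```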
We used a Raspberry Pi 4 Model B and an OKdo camera module in this project. Additionally, we included some minor components to help the user determine the status of the device and physically turn it off: an RGB LED, a piezo buzzer, and a tactile switch. A tripod can also be attached to and detached from the bottom of the enclosure to adjust the camera angle, which aids the mobility of the device. Ventilation also plays a vital role in any hardware, so the enclosure has exhaust vents at the top, right, and left.
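One possible way to wire the status LED, buzzer, and shutdown switch is sketched below using the gpiozero library; both the library choice and the GPIO pin numbers are assumptions for illustration only.

```python
# Sketch of the status LED, buzzer, and shutdown switch (assumptions:
# gpiozero is used and the GPIO pin numbers below are illustrative only).
from signal import pause
from subprocess import run

from gpiozero import Button, Buzzer, RGBLED

led = RGBLED(red=17, green=27, blue=22)   # pin numbers are assumptions
buzzer = Buzzer(23)
switch = Button(24, hold_time=3)          # hold for 3 s to power off

led.color = (0, 1, 0)                     # green: device running

def shutdown():
    led.color = (1, 0, 0)                 # red: shutting down
    buzzer.beep(on_time=0.2, off_time=0.2, n=3)
    run(["sudo", "shutdown", "-h", "now"])

switch.when_held = shutdown
pause()                                   # keep the script alive
```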
Raspberry Pi Specifications
| Component | Specification |
| --------- | ------------- |
| CPU | BCM2835 ARM Quad-Core 64-bit @ 1.8GHz |
| OS | Debian GNU/Linux 11 (bullseye) aarch64 |
| RAM | 8 GB |
OKdo Camera Module Specifications
| Attribute | Value |
| --------- | ----- |
| Sensor | 5MP OV5647 |
| Resolution | 1080p |
| FPS | 30 |
The trained model covers eleven classes. It can detect compliant and noncompliant PPE for construction, and it can also detect persons, which helps the application attribute particular violations to each worker (see the sketch after the class list).
- Helmet
- No Helmet
- Glasses
- No Glasses
- Vest
- No Vest
- Gloves
- No Gloves
- Boots
- No Boots
- Person
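As referenced above, the sketch below shows one way the "No ..." detections could be associated with a detected person using bounding-box overlap; the overlap criterion and the 0.5 threshold are illustrative assumptions.

```python
# Sketch of assigning "No ..." detections to persons via bounding-box overlap.
# The overlap criterion (fraction of the PPE box inside the person box) and the
# 0.5 threshold are illustrative assumptions.

def overlap_ratio(inner, outer):
    """Fraction of `inner` box area lying inside `outer`; boxes are (x1, y1, x2, y2)."""
    x1 = max(inner[0], outer[0])
    y1 = max(inner[1], outer[1])
    x2 = min(inner[2], outer[2])
    y2 = min(inner[3], outer[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return inter / area if area else 0.0

def violations_per_person(persons, detections, threshold=0.5):
    """persons: list of person boxes; detections: list of (class_name, box)."""
    result = {i: [] for i in range(len(persons))}
    for name, box in detections:
        if not name.startswith("No "):
            continue
        for i, person in enumerate(persons):
            if overlap_ratio(box, person) >= threshold:
                result[i].append(name)
    return result

# Example: one person missing a helmet
persons = [(100, 50, 300, 500)]
detections = [("No Helmet", (150, 60, 250, 160)), ("Vest", (140, 200, 260, 350))]
print(violations_per_person(persons, detections))   # {0: ['No Helmet']}
```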
The model was trained using these datasets with different image augmentations. The custom datasets used in this project were pre-processed, thoroughly scanned, and labeled according to the classes mentioned above. Images were resized to 640x640 and given different augmentations. The final datasets are split into three sets: a 70% training set, a 20% validation set, and a 10% testing set.
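A simple way to reproduce such a 70/20/10 split is sketched below; the directory layout and file extension are assumptions, and the actual datasets were prepared with their own tooling.

```python
# Sketch of the 70/20/10 split described above (assumption: images live in a
# flat "images/" directory; the real datasets were prepared separately).
import random
from pathlib import Path

images = sorted(Path("images").glob("*.jpg"))
random.seed(42)                 # reproducible shuffle
random.shuffle(images)

n = len(images)
train = images[: int(0.7 * n)]
valid = images[int(0.7 * n): int(0.9 * n)]
test = images[int(0.9 * n):]

print(len(train), len(valid), len(test))
```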
The datasets are available at the following links:
NOTE: Disregard this section if the application is set to the cloud environment. It applies only to the local environment, which uses a LAN-based broker to communicate with other devices.
To get started, install the required dependencies. It is highly recommended to use a virtual environment (Pipenv, Virtualenv) to isolate them from the system.
There are some external dependencies that are not included in the script. Download and install them first before continuing to the next step.
After that, just run this script and it will handle the installation.
./scripts/linux/install.sh
Download and install Mosquitto from https://mosquitto.org/download/. Make sure to set it up as a service so that it starts automatically on system startup. A username and password should be set up in the configuration by creating a password file (e.g. `passwd_file`):
# Uncomment the following values in this configuration file: mosquitto.conf
listener 1883
password_file passwd_file
allow_anonymous false
To create a username and password, enter this command:
# Note: `admin` is a username, you can change it if you want
# The -c flag creates the password file; after entering this command, it will prompt for a password
mosquitto_passwd -c passwd_file admin
This is optional, but if you want to run the broker manually instead of as a service, run this command:
mosquitto -c mosquitto.conf -v
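Once the broker is running, clients can connect with the credentials created above. A minimal paho-mqtt subscriber sketch is shown below; the broker address and topic name are assumptions.

```python
# Sketch of a client connecting to the local broker with the credentials
# created above (assumptions: broker host and topic name; paho-mqtt 1.x API).
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection is acknowledged
    client.subscribe("ppe/violations", qos=1)    # topic is an assumption

def on_message(client, userdata, message):
    print(message.topic, message.payload.decode())

client = mqtt.Client()
client.username_pw_set("admin", "your-password")  # credentials from mosquitto_passwd
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.1.10", 1883)              # broker's LAN address (assumption)
client.loop_forever()
```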
After conducting tests at various construction sites to evaluate the system's quality, we have successfully achieved all of our objectives, especially in terms of detection accuracy.
- PPE Detection System for Construction Site Safety Monitoring Test (Local-based) #1
- PPE Detection System for Construction Site Safety Monitoring Test (Local-based) #2
- PPE Detection System for Construction Site Safety Monitoring Test (Local-based) #3
- PPE Detection System for Construction Site Safety Monitoring Test (Local-based) #4
- PPE Detection System for Construction Site Safety Monitoring Test (Local-based) #5
- PPE Detection System for Construction Site Safety Monitoring Demo (Cloud-based with AWS) #1
- PPE Detection System for Construction Site Safety Monitoring Demo (Cloud-based with AWS) #2
- PPE Detection System for Construction Site Safety Monitoring Demo (Cloud-based with AWS) #3
- PPE Detection System for Construction Site Safety Monitoring Deployment Demo
- https://github.com/aws-samples/amazon-kinesis-video-streams-consumer-library-for-python/
- https://en.wikipedia.org/wiki/Representational_state_transfer/
- https://en.wikipedia.org/wiki/Light-emitting_diode/
- https://en.wikipedia.org/wiki/Bounding_volume/
- https://learn.microsoft.com/en-us/powershell/
- https://en.wikipedia.org/wiki/Push-button/
- https://www.django-rest-framework.org/
- https://github.com/WongKinYiu/yolor/
- https://en.wikipedia.org/wiki/Base64/
- https://en.wikipedia.org/wiki/Buzzer/
- https://www.gnu.org/software/bash/
- https://en.wikipedia.org/wiki/Linux/
- https://gstreamer.freedesktop.org/
- https://en.wikipedia.org/wiki/API/
- https://www.raspberrypi.com/
- https://www.sqlalchemy.org/
- https://www.virtualbox.org/
- https://www.paramiko.org/
- https://www.openssh.com/
- https://aws.amazon.com/
- https://www.python.org/
- https://www.sqlite.org/
- https://mosquitto.org/
- https://colab.google/
- https://git-scm.com/
- https://opencv.org/
- https://mqtt.org/