This repository contains a complete pipeline for processing 3D indoor scans, performing semantic segmentation using PointNet++, and simulating a robot navigating the environment.
- Data Processing: Merge multiple partial PLY scans into a single world frame.
- Noise Removal: Statistical outlier removal to clean sensor data.
- Deep Learning: Semantic segmentation via a PointNet++ architecture.
- Robot Simulation: A* pathfinding on a 2D occupancy grid with a walking humanoid animation.
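The noise-removal step above can be sketched in plain NumPy. This is a minimal, brute-force illustration of statistical outlier removal (the pipeline itself would typically use a KD-tree-backed implementation such as Open3D's); the function name and parameters here are illustrative, not the project's API:

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than `std_ratio` standard deviations above the cloud-wide average.
    Brute-force O(n^2) sketch for small clouds."""
    diffs = points[:, None, :] - points[None, :, :]   # (n, n, 3) pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)            # (n, n) pairwise distances
    dists.sort(axis=1)
    knn_mean = dists[:, 1:k + 1].mean(axis=1)         # skip self-distance (0)
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]

# A dense cluster plus one far-away stray point:
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(size=(200, 3)), [[50.0, 50.0, 50.0]]])
clean = remove_statistical_outliers(cloud)
print(len(cloud), "->", len(clean))  # the stray point is dropped
```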
- Python: 3.9, 3.10, or 3.11 (Recommended for Open3D 0.19 compatibility)
- Conda (Anaconda or Miniconda)
Create a dedicated environment using the provided environment.yml:
conda env create -f environment.yml
conda activate workshop_3d

Note: If environment.yml fails or you prefer pip, use:

pip install numpy==2.0.2 open3d==0.19.0 torch==2.x matplotlib scipy

The project follows a sequential workflow:
| Step | Script | Description |
|---|---|---|
| 1 | synthetic_generate.py | Generate synthetic office/kitchen room data (creates scans/). |
| 2 | pointnet_train.py | (Optional) Train the segmentation model on the generated data. |
| 3 | data_processing.py | Main pipeline: load, merge, clean, and segment the scene. |
| 4 | robot_emulator.py | Launch the navigation simulation to the fridge. |
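The merge in step 3 boils down to moving each partial scan into a common world frame and concatenating. A minimal NumPy sketch, assuming camera_poses.json provides one 4x4 world-from-camera matrix per scan (the function names here are illustrative):

```python
import numpy as np

def transform_points(points, pose):
    """Apply a 4x4 rigid transform (world <- camera) to an (n, 3) array."""
    R, t = pose[:3, :3], pose[:3, 3]
    return points @ R.T + t

def merge_scans(scans, poses):
    """Concatenate partial scans after moving each into the world frame.
    `scans` and `poses` are parallel lists of (n_i, 3) arrays and 4x4 matrices."""
    return np.vstack([transform_points(p, T) for p, T in zip(scans, poses)])

# Two tiny scans; the second pose translates its points by +2 along x:
scan_a = np.zeros((4, 3))
scan_b = np.zeros((4, 3))
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[0, 3] = 2.0
merged = merge_scans([scan_a, scan_b], [pose_a, pose_b])
print(merged.shape)  # (8, 3)
```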
python data_processing.py
python robot_emulator.py

Verify the pipeline logic with automated unit tests (requires pytest):
pytest tests/

- data_processing.py: Student script for processing point clouds.
- pointnet_train.py: Model architecture and training loop.
- robot_emulator.py: Navigation logic and 3D visualization.
- scans/: Input directory containing .ply scans and camera_poses.json.
- tests/: Unit tests for data processing and model helpers.
- pointnet_weights.pth: Pre-trained weights for the PointNet++ model.
- scene_result.npz: Output of the pipeline (merged points + labels).
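The navigation in robot_emulator.py is described as A* on a 2D occupancy grid. A self-contained sketch of that idea, using 4-connected moves and a Manhattan-distance heuristic (the function signature and grid encoding here are assumptions, not the script's actual API):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (True = blocked cell), 4-connected moves.
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()                  # tiebreaker so the heap never compares cells
    open_set = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_score = {start: 0}
    while open_set:
        _, _, cell = heapq.heappop(open_set)
        if cell == goal:                     # reconstruct path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g_score[cell] + 1
                if ng < g_score.get(nb, float("inf")):
                    g_score[nb] = ng
                    came_from[nb] = cell
                    heapq.heappush(open_set, (ng + h(nb), next(tie), nb))
    return None

# 3x4 grid with a wall; the path must detour around the blocked cells.
grid = [[False, False, False, False],
        [False, True,  True,  False],
        [False, False, False, False]]
path = astar(grid, (0, 0), (2, 3))
print(path)
```

Lazy insertion into the heap (re-pushing a cell when a shorter route is found) stands in for a decrease-key operation; with a consistent heuristic like Manhattan distance, the first time the goal is popped the path is optimal.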
| ID | Class | Color |
|---|---|---|
| 0 | Floor | Light Grey |
| 1 | Wall | Beige |
| 2 | Ceiling | Pale Blue |
| 3 | Table | Brown |
| 4 | Chair | Green |
| 5 | Fridge | Blue |
| 6 | Sofa | Red |
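For visualization, per-point labels can be turned into per-point RGB colors with a single NumPy fancy-indexing lookup. The RGB values below are illustrative approximations of the named colors, not the project's exact palette:

```python
import numpy as np

# Approximate RGB (0-1) per class ID; exact values are an assumption.
PALETTE = np.array([
    [0.80, 0.80, 0.80],  # 0 Floor   - light grey
    [0.90, 0.85, 0.70],  # 1 Wall    - beige
    [0.70, 0.85, 1.00],  # 2 Ceiling - pale blue
    [0.55, 0.35, 0.20],  # 3 Table   - brown
    [0.20, 0.70, 0.30],  # 4 Chair   - green
    [0.10, 0.30, 0.90],  # 5 Fridge  - blue
    [0.90, 0.20, 0.20],  # 6 Sofa    - red
])

labels = np.array([0, 5, 6, 1])   # one class ID per point
colors = PALETTE[labels]          # (n, 3) array, ready to assign to a point cloud
print(colors.shape)  # (4, 3)
```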