Seeing-the-Unseen

Dataset and code accompanying the paper "Seeing the Unseen: Semantic Segmentation and Uncertainty Quantification for Delamination Detection in Building Facades".

📦 Data and Code Availability

BFDS-2025 Dataset

The BFDS-2025 dataset is partially released in this repository to support transparency and reproducibility of the reported research.
Due to data volume, annotation complexity, and ongoing research extensions, the full dataset is available upon reasonable academic request.

Access conditions for the full BFDS-2025 dataset:

  • The dataset is provided for non-commercial academic research only.
  • Interested researchers are required to contact the authors directly to request access.
  • Requests should briefly describe the intended research purpose.
  • Upon approval, access to the complete dataset will be granted through a private distribution channel.
  • Any publication that uses the dataset must cite the associated paper and acknowledge the data source.
  • Redistribution of the dataset without permission is not allowed.
  • Any derivative datasets or benchmarks should clearly state that they are based on BFDS-2025.

Contact for data access:
Please request the full dataset by contacting the authors via GitHub Issues (preferred) or direct message/email, as indicated in this repository. Contact Email: [email protected]

Code Availability

This repository provides the core implementation used in the paper, including:

  • Model architecture definitions
  • Training and inference scripts
  • Evaluation and visualization utilities

The released code is sufficient to reproduce the main experimental results reported in the manuscript using the publicly available subset of the data.
Additional scripts, configurations, or pretrained models related to extended experiments may be provided upon reasonable academic request, consistent with the dataset access policy.


🚀 Usage

This section describes the basic workflow for reproducing the main experiments reported in the paper using the released code and publicly available subset of BFDS-2025.

1. Environment Setup

We recommend using a virtual environment (e.g., Conda) with Python ≥ 3.8.

conda create -n tihsnet python=3.8
conda activate tihsnet
pip install -r requirements.txt

2. Dataset Preparation

Organize the BFDS-2025 dataset using the following directory structure:

BFDS-2025/
├── train/
│   ├── image_irt/    # Infrared thermal images
│   ├── image_rgb/    # Corresponding RGB images
│   ├── label/        # Annotation files
│   └── mask/         # Pixel-wise segmentation masks
├── val/
│   ├── image_irt/
│   ├── image_rgb/
│   ├── label/
│   └── mask/
└── test/
    ├── image_irt/
    ├── image_rgb/
    ├── label/
    └── mask/
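
A quick structural check before training can catch missing folders early. The snippet below is a minimal sketch (the check_dataset helper is illustrative and not part of the released code); it assumes the four subfolders exist in every split and that their file counts match:

import os

SPLITS = ("train", "val", "test")
SUBDIRS = ("image_irt", "image_rgb", "label", "mask")

def check_dataset(root="BFDS-2025"):
    # Verify every split contains the four expected subfolders.
    for split in SPLITS:
        for sub in SUBDIRS:
            path = os.path.join(root, split, sub)
            if not os.path.isdir(path):
                raise FileNotFoundError(f"Missing directory: {path}")
        # Flag splits whose IRT/RGB/label/mask file counts disagree.
        counts = {sub: len(os.listdir(os.path.join(root, split, sub)))
                  for sub in SUBDIRS}
        if len(set(counts.values())) != 1:
            print(f"[{split}] file counts differ: {counts}")

check_dataset()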

Annotation Description

  • mask/
    Contains human-interpretable binary segmentation masks generated using LabelMe.
    These masks are stored as 2D images and visually indicate delaminated regions, where:
    • Foreground pixels represent defect regions.
    • Background pixels represent non-defective areas.
    The masks can be inspected directly by human annotators and serve as the primary visual ground truth.
  • label/
    Contains machine-readable label files derived from the corresponding masks.
    These labels are generated by converting the binary masks into indexed class maps (e.g., 0 for background, 1 for defect) and are not visually interpretable by humans.
    The label files are used as direct inputs to the training and evaluation pipelines for semantic segmentation.

The conversion from mask to label is performed using a deterministic RGB-to-index mapping, ensuring consistent class encoding across the dataset.
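
For reference, this conversion can be reproduced in a few lines of Python. The sketch below is illustrative rather than the exact released script; it assumes binary masks where defect pixels are bright on a dark background:

import numpy as np
from PIL import Image

def mask_to_label(mask_path, label_path, threshold=127):
    # Read the visual mask as grayscale and binarize it into an
    # indexed class map: 0 = background, 1 = defect.
    mask = np.array(Image.open(mask_path).convert("L"))
    label = (mask > threshold).astype(np.uint8)   # deterministic mapping
    Image.fromarray(label, mode="L").save(label_path)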

3. Experiment Configuration

Before model training and evaluation, key experimental settings can be adjusted by modifying the argument configurations defined in train.py and test.py.
These parameters control hardware usage, dataset paths, model architecture, training strategy, and output directories.

3.1 Hardware and Runtime Settings

The following arguments specify GPU usage and parallel data loading behavior:

  • --gpus
    GPU device ID(s) used for training and inference.

  • --num-workers
    Number of subprocesses used for data loading.

3.2 Dataset and Task Configuration

Dataset location and task-related settings can be configured as follows:

  • --datasetdir
    Root directory of the BFDS-2025 dataset.

  • --num-classes
    Number of segmentation classes (default: 2 for binary segmentation).

  • --input-size
    Input image resolution used for training and inference.

3.3 Training Hyperparameters

Key hyperparameters controlling the training process include:

  • --batch-size
    Batch size for model training.

  • --max-epoch
    Maximum number of training epochs.

  • --stop-epoch
    Epoch at which training is terminated (used for early stopping or controlled training length).

  • --interval-validate
    Frequency (in epochs) for validation during training.

  • --lr-model
    Learning rate for model optimization.

  • --dropout-rate
    Dropout rate used for Monte Carlo Dropout during training and inference (see the inference sketch after this list).

  • --seed
    Random seed to ensure reproducibility.
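
Because --dropout-rate drives Monte Carlo Dropout, uncertainty at inference time follows the standard recipe: keep dropout layers active in evaluation mode and average several stochastic forward passes. The PyTorch sketch below illustrates the idea (mc_dropout_predict and num_passes are illustrative names, not part of the released code):

import torch

def mc_dropout_predict(model, image, num_passes=20):
    # Standard MC Dropout inference: put the model in eval mode but
    # re-enable dropout layers, then average multiple stochastic passes.
    model.eval()
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1)
                             for _ in range(num_passes)])
    # Mean = prediction; variance = per-pixel uncertainty estimate.
    return probs.mean(dim=0), probs.var(dim=0)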

3.4 Model Selection and Checkpointing

  • --model-name
    Specifies the network architecture.
    Supported options include:
    tihsnet, unet, unetplusplus, manet, linknet, fpn, pspnet, deeplabv3, deeplabv3plus, pan, a2fpn, abcnet, dinknet34, dscnet_pro, fasternet, newnetrrm, segnet, transunet, segformer-B5, u-netformer.

  • --resume
    Path to a checkpoint file for resuming training or performing evaluation.

3.5 Output and Logging Directories

Model outputs and experimental logs are saved according to predefined directory settings:

  • In train.py, training logs and checkpoints are saved to:
    logs_TIHSNet/
    
  • In test.py, evaluation results and visualization are saved to:
    results_TIHSNet/mc
    

4. Model Training

Train the model using the provided training script:

python train.py
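
For example, to train TIHSNet with explicit settings (the flag values below are illustrative, not prescribed; see Section 3 for the full argument list):

python train.py --gpus 0 --datasetdir ./BFDS-2025 --model-name tihsnet --num-classes 2 --batch-size 8 --lr-model 1e-4 --dropout-rate 0.1 --seed 42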

5. Model Evaluation

Evaluate a trained model on the test set:

python test.py
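
For example, to evaluate a trained TIHSNet model (the checkpoint path is illustrative):

python test.py --gpus 0 --datasetdir ./BFDS-2025 --model-name tihsnet --resume logs_TIHSNet/checkpoint_best.pth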

🔁 Reproducibility Statement

  • All released resources have been verified for accessibility.
  • The repository includes clear instructions for environment setup, data organization, and experiment execution to support experimental reproducibility.
  • Minor numerical differences may occur due to hardware or framework version variations.
  • If you encounter any issues accessing the resources or reproducing the results, please open a GitHub Issue.

📚 Citation and Acknowledgment Policy

If you use BFDS-2025, the provided code, or any derived resources in your research, please cite this paper.


⚖️ License

Code License

The source code in this repository is released under the MIT License. See the LICENSE file for full license text.

Dataset License

The BFDS-2025 dataset is NOT released under the MIT License.

  • The dataset is provided for non-commercial academic research purposes only.
  • Redistribution of the dataset without explicit permission is prohibited.
  • Access to the full dataset requires prior approval from the authors.
  • Any use of the dataset must properly cite the associated publication.

Please refer to the Data and Code Availability section for details on dataset access.
