Image and Video Augmentation Library Benchmarks

A comprehensive benchmarking suite for comparing the performance of popular image and video augmentation libraries including Albumentations, imgaug, torchvision, Kornia, and Augly.

Overview

This benchmark suite measures the throughput and performance characteristics of common augmentation operations across different libraries. It features:

  • Benchmarks for both image and video augmentation
  • Adaptive warmup to ensure stable measurements
  • Multiple runs for statistical significance
  • Detailed performance metrics and system information
  • Thread control settings for consistent performance
  • Support for multiple image/video formats and loading methods

Benchmark Types

Image Benchmarks

The image benchmarks compare the performance of various libraries on standard image transformations. All benchmarks are run on a single CPU thread to ensure consistent and comparable results.
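
The exact thread-pinning mechanism lives in the benchmark scripts; as a rough illustration of how a single-thread environment is typically enforced (the environment variables below are standard OpenMP/OpenBLAS/MKL knobs, not settings taken from this repository's code), consider:

import os

# These must be set before importing numpy, cv2, or torch, which read them at import time.
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS", "NUMEXPR_NUM_THREADS"):
    os.environ[var] = "1"

import cv2
import torch

cv2.setNumThreads(0)      # disable OpenCV's internal thread pool
torch.set_num_threads(1)  # keep PyTorch CPU ops single-threaded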

Detailed Image Benchmark Results

Image Speedup Analysis

Video Benchmarks

The video benchmarks compare CPU-based processing (Albumentations) with GPU-accelerated processing (Kornia) for video transformations. The benchmarks use the UCF101 dataset, which contains realistic videos from 101 action categories.
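
The harness runs this comparison for you; purely to illustrate the two execution models being compared (per-frame CPU processing with Albumentations versus batched GPU processing with Kornia), a minimal sketch with made-up clip dimensions and a single transform might look like:

import numpy as np
import torch
import albumentations as A
import kornia.augmentation as K

frames = np.random.randint(0, 256, (32, 256, 256, 3), dtype=np.uint8)  # T x H x W x C uint8 clip

# CPU path: Albumentations works on numpy arrays, one frame at a time.
cpu_flip = A.HorizontalFlip(p=1.0)
cpu_out = np.stack([cpu_flip(image=frame)["image"] for frame in frames])

# GPU path: Kornia works on float tensors, processing the whole clip as one batch.
gpu_flip = K.RandomHorizontalFlip(p=1.0)
clip = torch.from_numpy(frames).permute(0, 3, 1, 2).float().div(255).cuda()  # T x C x H x W
gpu_out = gpu_flip(clip)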

Detailed Video Benchmark Results

Video Speedup Analysis

Performance Highlights

Image Augmentation Performance

Albumentations is generally the fastest library for image augmentation, with a median speedup of 4.1× compared to other libraries. For some transforms, the speedup can be as high as 119.7× (MedianBlur).

Video Augmentation Performance

For video processing, the performance comparison between CPU (Albumentations) and GPU (Kornia) shows interesting trade-offs. While GPU acceleration provides significant benefits for complex transformations, CPU processing can be more efficient for simple operations.

Requirements

The benchmark automatically creates isolated virtual environments for each library and installs the necessary dependencies. Base requirements:

  • Python 3.10+
  • uv (for fast package installation)
  • Disk space for virtual environments
  • Image/video dataset in a supported format

Supported Libraries

The suite currently covers the libraries listed in the overview: Albumentations, imgaug, torchvision, Kornia, and Augly. Each library's specific dependencies are managed through separate requirements files in the requirements/ directory.

Setup

Getting Started

For testing and comparison purposes, you can use standard datasets:

For image benchmarks:

wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar
tar -xf ILSVRC2012_img_val.tar -C /path/to/your/target/directory

For video benchmarks:

# UCF101 dataset
wget https://www.crcv.ucf.edu/data/UCF101/UCF101.rar
unrar x UCF101.rar -d /path/to/your/target/directory
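
The benchmark handles video loading itself; if you just want to sanity-check the extracted clips, a quick OpenCV read (the file path below is only a placeholder, not a path the benchmark requires) looks like:

import cv2

cap = cv2.VideoCapture("/path/to/your/target/directory/some_clip.avi")  # any extracted .avi clip
frames = []
while True:
    ok, frame = cap.read()  # ok becomes False at end of file
    if not ok:
        break
    frames.append(frame)    # BGR uint8 array, H x W x 3
cap.release()
print(f"Read {len(frames)} frames, first frame shape: {frames[0].shape}")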

Using Your Own Data

We strongly recommend running the benchmarks on your own dataset that matches your use case:

  • Use images/videos that are representative of your actual workload
  • Consider sizes and formats you typically work with
  • Include edge cases specific to your application

This will give you more relevant performance metrics for your specific use case.

Running Benchmarks

Running Image Benchmarks

To benchmark a single library:

./run_single.sh -l albumentations -d /path/to/images -o /path/to/output

To run benchmarks for all supported libraries and generate a comparison:

./run_all.sh -d /path/to/images -o /path/to/output --update-docs

Running Video Benchmarks

To benchmark a single library:

./run_video_single.sh -l albumentations -d /path/to/videos -o /path/to/output

To run benchmarks for all supported libraries and generate a comparison:

./run_video_all.sh -d /path/to/videos -o /path/to/output --update-docs

Methodology

The benchmark methodology is designed to ensure fair and reproducible comparisons:

  1. Data Loading: Data is loaded using library-specific loaders to ensure optimal format compatibility
  2. Warmup Phase: Adaptive warmup until performance variance stabilizes
  3. Measurement Phase: Multiple runs with statistical analysis (steps 2 and 3 are sketched after this list)
  4. Environment Control: Consistent thread settings and hardware utilization
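
The real warmup and measurement logic lives in the benchmark code; the sketch below only illustrates the idea behind steps 2 and 3 (the window size, stability threshold, and run count are placeholder values, not the suite's actual settings):

import time
import statistics

def bench(transform, images, max_warmup=1000, window=10, rel_std=0.05, runs=5):
    # Step 2: warm up until the relative spread of recent throughputs drops below a threshold.
    recent = []
    for _ in range(max_warmup):
        start = time.perf_counter()
        for img in images:
            transform(img)
        recent.append(len(images) / (time.perf_counter() - start))
        recent = recent[-window:]
        if len(recent) == window and statistics.stdev(recent) / statistics.mean(recent) < rel_std:
            break

    # Step 3: measure several full runs and report the median throughput (images per second).
    throughputs = []
    for _ in range(runs):
        start = time.perf_counter()
        for img in images:
            transform(img)
        throughputs.append(len(images) / (time.perf_counter() - start))
    return statistics.median(throughputs)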

For detailed methodology, see the image and video benchmark READMEs in this repository.

Contributing

Contributions are welcome! If you'd like to add support for a new library, improve the benchmarking methodology, or fix issues, please submit a pull request.

When contributing, please:

  1. Follow the existing code style
  2. Add tests for new functionality
  3. Update documentation as needed
  4. Ensure all tests pass
