Add differentiable modulation and demodulation methods for BPSK, QPSK, and QAM #10


Open · selimfirat wants to merge 1 commit into dev
Conversation

selimfirat (Member)

Making Modulations Differentiable

Description

This PR introduces differentiable paths in the modulation and demodulation schemes within Kaira. This enables gradient-based training of neural networks that include modulation layers in their architectures.

Key Features

  • Adds new utility functions for differentiable modulation operations in differentiable.py
  • Extends base modulator and demodulator classes with forward_soft methods for differentiable processing
  • Implements differentiable modulation for BPSK, QPSK, and QAM schemes
  • Adds comprehensive tests for differentiable operations

Implementation Details

The implementation preserves backward compatibility while adding new capabilities:

  • BaseModulator and BaseDemodulator classes are extended with forward_soft methods
  • New utility functions in differentiable.py provide core operations for differentiable modulation
  • The existing forward methods remain unchanged
  • Each modulation scheme has been updated to handle soft bit probabilities properly (a minimal sketch of this mapping follows this list)
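
For illustration, the kind of utility that differentiable.py provides could look like the sketch below for BPSK. The function name soft_bpsk_modulate and the mapping convention (bit 0 maps to +1, bit 1 maps to -1) are assumptions made for this example, not necessarily the names or conventions used in the PR:

import torch

def soft_bpsk_modulate(bit_probs: torch.Tensor) -> torch.Tensor:
    # Map soft bit probabilities p in [0, 1] to expected BPSK symbols.
    # Assuming bit 0 -> +1 and bit 1 -> -1, the expected symbol is
    # E[s] = (1 - p) * (+1) + p * (-1) = 1 - 2p, which remains
    # differentiable with respect to the input probabilities.
    return 1.0 - 2.0 * bit_probs

Because the mapping is a smooth function of the probabilities, gradients from any downstream loss flow directly back into the network that produced them.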

Testing

Added a comprehensive test suite in tests/modulations/test_differentiable.py to verify:

  • Differentiability of individual operations
  • Gradient flow through modulation and demodulation operations (an illustrative check follows this list)
  • End-to-end differentiability of the entire modulation/demodulation pipeline
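
For reference, a gradient-flow check in the spirit of these tests might look like the sketch below; the test name and the stand-in soft mapping are hypothetical and only illustrate the kind of assertion involved:

import torch

def test_gradient_flows_through_soft_modulation():
    # Soft bit probabilities produced by an upstream network (hypothetical values).
    soft_bits = torch.tensor([0.1, 0.9, 0.2, 0.8], requires_grad=True)
    # Stand-in for modulator.forward_soft: expected BPSK symbols 1 - 2p.
    symbols = 1.0 - 2.0 * soft_bits
    # Any scalar loss suffices for checking that gradients reach the inputs.
    loss = symbols.pow(2).sum()
    loss.backward()
    assert soft_bits.grad is not None
    assert torch.all(torch.isfinite(soft_bits.grad))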

Use Case Example

import torch
# Illustrative import; the exact module path in Kaira may differ
from kaira.modulations import QPSKModulator, QPSKDemodulator

# Create modulators and demodulators
modulator = QPSKModulator()
demodulator = QPSKDemodulator()

# Create soft bits with gradients (represents probabilities from a neural network)
soft_bits = torch.tensor([0.1, 0.9, 0.2, 0.8], requires_grad=True)

# Apply differentiable modulation
symbols = modulator.forward_soft(soft_bits)

# Apply channel effects
# ...

# Apply differentiable demodulation
noise_var = 0.1
decoded_bits = demodulator.forward_soft(symbols, noise_var)

# Compute loss and backpropagate gradients
# (some_loss_function and target_bits are placeholders for the user's own loss and labels)
loss = some_loss_function(decoded_bits, target_bits)
loss.backward()  # Gradients will flow through the entire pipeline
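
The channel step elided in the example could be as simple as an AWGN perturbation, which keeps the computation graph intact. The snippet below reuses symbols, noise_var, and demodulator from the example above and is only one possible stand-in for Kaira's channel components:

# Hypothetical AWGN stand-in for the "Apply channel effects" step:
# additive Gaussian noise is itself differentiable with respect to the symbols.
noisy_symbols = symbols + (noise_var ** 0.5) * torch.randn_like(symbols)
decoded_bits = demodulator.forward_soft(noisy_symbols, noise_var)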

Future Work

  • Add differentiable implementations for other modulation schemes
  • Optimize computational efficiency of differentiable operations
  • Provide more examples for neural network training with differentiable modulations

Commit: Add differentiable modulation and demodulation methods for BPSK, QPSK, and QAM

- Implement `forward_soft` methods in BaseModulator and BaseDemodulator for soft bit modulation and demodulation.
- Introduce differentiable operations in the new `differentiable.py` module.
- Create unit tests for differentiable operations in `test_differentiable.py`.
- Update existing modulation classes to support soft bit processing.
@selimfirat self-assigned this Apr 17, 2025