
AIP-104: Dynamic Task Iteration and Dynamic Task Partitioning#62922

Draft
dabla wants to merge 231 commits into apache:main from dabla:feature/dynamic-task-iteration

Conversation

@dabla
Contributor

@dabla dabla commented Mar 5, 2026


Was generative AI tooling used to co-author this PR?
  • [x] Yes (please specify the tool below)

GitHub Copilot with Claude Opus 4.6 for some parts, like setting up tests or improving documentation.

Description

This PR is the initial implementation of Dynamic Task Iteration (DTI), as discussed in the devlist and building upon the foundations of AIP-104.

For further context on the use cases and performance benefits of DTI, see this Medium Article.

The XCom Database Constraint Challenge
While porting our internal "monkey-patched" version of DTI (used since Airflow 2.x) to the core, I've identified a significant technical hurdle regarding XCom handling.

The Issue

Around Airflow 2.10/2.11, a change was introduced to the database constraints for the XCom table. Specifically:

  • Current State: The DB prevents creating indexed XComs (map_index >= 0) unless a corresponding mapped TaskInstance exists in the task_instance table.
  • The Conflict: DTI is designed to process multiple indexed XComs within a single Task Instance. Because there is no 1-to-1 mapping of map_index to a physical TI row, the DB constraint blocks the insertion of these results.
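The conflict described above can be illustrated with a small, self-contained simulation. This is not the real Airflow schema — the actual rule is a database constraint on the xcom/task_instance tables — but a hypothetical model of the check, showing why a single unmapped TI (map_index == -1) cannot write indexed XComs:

```python
# Hypothetical, simplified model of the XCom DB constraint described above.
# In reality this is enforced at the database level; here we simulate it
# with a plain in-memory set of (task_id, map_index) TaskInstance rows.

existing_tis = {("my_task", -1)}  # one ordinary, unmapped task instance


def can_insert_xcom(task_id: str, map_index: int) -> bool:
    """Mirror the rule: indexed XComs (map_index >= 0) require a
    matching mapped TaskInstance row; unindexed ones always pass."""
    if map_index < 0:
        return True
    return (task_id, map_index) in existing_tis


# DTI wants to write several indexed results from ONE task instance,
# but no mapped TI rows exist, so every indexed insert is rejected:
results = [can_insert_xcom("my_task", i) for i in range(3)]
```

Under this model, `results` is all `False` while a plain (`map_index == -1`) XCom is still accepted — which is exactly the situation the PR runs into.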

Current Workaround in this PR

To maintain functionality without immediate schema changes, I have implemented a custom XComIterable. This appends the index directly to the XCom key to bypass the constraint and manages the iteration logic internally.
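A minimal sketch of that workaround, assuming a dict-backed store and an illustrative `key__index` naming scheme (the actual key format and class internals in the PR may differ):

```python
# Hedged sketch of the key-suffix workaround: instead of using map_index,
# the index is folded into the XCom key itself, so every row is a "plain"
# (map_index == -1) XCom and the constraint never fires.


def indexed_key(key: str, index: int) -> str:
    return f"{key}__{index}"  # naming scheme is an assumption


class XComIterable:
    """Minimal stand-in for the PR's XComIterable: reads the suffixed
    keys back in insertion order."""

    def __init__(self, store: dict, key: str, length: int):
        self.store, self.key, self.length = store, key, length

    def __iter__(self):
        for i in range(self.length):
            yield self.store[indexed_key(self.key, i)]


store = {}
for i, value in enumerate(["a", "b", "c"]):
    store[indexed_key("return_value", i)] = value  # all map_index == -1

ordered = list(XComIterable(store, "return_value", 3))
```

Here `ordered` recovers `["a", "b", "c"]` without ever touching `map_index`, at the cost of the iteration bookkeeping living in the operator rather than the DB.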

I believe the cleanest path forward is to adjust the DB constraint to allow indexed XComs even in the absence of an indexed TI. This would:

  • Simplify the DTI implementation (e.g. remove the need for XComIterable), allowing the already existing LazyXComIterator to be used automatically.
  • Align the DB schema with the more flexible task patterns introduced in Airflow 3.x.

What this PR doesn't implement yet

The partitioning feature, i.e. combining Dynamic Task Mapping with Dynamic Task Iteration in one fluent API.
It also doesn't take pools into account yet; at the moment concurrency is controlled via the max_active_tis_per_dag parameter, which defaults to os.cpu_count() if not defined.


  • Read the Pull Request Guidelines for more information. Note: commit author/co-author name and email in commits become permanently public when merged.
  • For fundamental code changes, an Airflow Improvement Proposal (AIP) is needed.
  • When adding a dependency, check compliance with the ASF 3rd Party License Policy.
  • For significant user-facing changes create newsfragment: {pr_number}.significant.rst or {issue_number}.significant.rst, in airflow-core/newsfragments.

@dabla dabla requested review from amoghrajesh, ashb and kaxil as code owners March 5, 2026 09:28
@dabla dabla marked this pull request as draft March 5, 2026 09:35
@dabla dabla force-pushed the feature/dynamic-task-iteration branch 2 times, most recently from 23a6d2b to d8a30b9 on March 5, 2026 12:33
@dabla dabla force-pushed the feature/dynamic-task-iteration branch from d8a30b9 to edad5de on March 5, 2026 12:39
kaxil
kaxil previously requested changes Mar 5, 2026
Member

@kaxil kaxil left a comment


Thanks for working on this — excited to see DTI taking shape for Airflow 3.2. I've gone through the full diff and have feedback on the implementation, some are bugs that would crash at runtime, others are design choices worth iterating on.

A few high-level things:

  1. No tests. ~700 lines of new production code with zero test coverage. We need tests for IterableOperator, TaskExecutor, MappedTaskInstance, HybridExecutor, XComIterable, DecoratedDeferredAsyncOperator, and the iterate/iterate_kwargs methods — covering success, failure, retry, deferral, and edge cases.

  2. Worker resilience. Since DTI runs N sub-tasks inside a single worker process, we need to think through what happens when that worker dies mid-execution — the scheduler has no record of which sub-tasks completed. Worth documenting the expected behavior and trade-offs here (and whether we want to add checkpointing later).

  3. Thread safety. Several shared mutable structures (context dict, os.environ) are accessed concurrently from multiple threads without synchronization. This needs to be addressed before merge.

Inline comments below with specifics.

Inline comment threads:
  • task-sdk/src/airflow/sdk/definitions/iterableoperator.py (outdated)
  • task-sdk/src/airflow/sdk/bases/operator.py
  • task-sdk/src/airflow/sdk/definitions/mappedoperator.py
  • task-sdk/src/airflow/sdk/definitions/_internal/expandinput.py
  • task-sdk/src/airflow/sdk/definitions/_internal/expandinput.py (outdated)
  • task-sdk/src/airflow/sdk/execution_time/executor.py (outdated)
  • task-sdk/src/airflow/sdk/execution_time/lazy_sequence.py
  • task-sdk/src/airflow/sdk/definitions/iterableoperator.py (outdated)
  • task-sdk/src/airflow/sdk/definitions/iterableoperator.py
  • task-sdk/src/airflow/sdk/definitions/iterableoperator.py (outdated)
@dabla
Contributor Author

dabla commented Mar 5, 2026

Thanks for working on this — DTI is an interesting concept and I can see the use case. I've gone through the full diff and have a number of concerns, some are bugs that would crash at runtime, others are architectural questions worth discussing before this goes further.

A few high-level things:

  1. No tests. ~700 lines of new production code with zero test coverage. We need tests for IterableOperator, TaskExecutor, MappedTaskInstance, HybridExecutor, XComIterable, DecoratedDeferredAsyncOperator, and the iterate/iterate_kwargs methods — covering success, failure, retry, deferral, and edge cases.

Thanks for pointing this out. As mentioned earlier on Slack, this PR is currently intended as an initial draft to demonstrate the concept and gather early architectural feedback.

I agree that proper test coverage is essential before this can move forward. The plan is to add unit tests covering the components you mentioned (IterableOperator, TaskExecutor, MappedTaskInstance, HybridExecutor, XComIterable, DecoratedDeferredAsyncOperator, and the iterate/iterate_kwargs APIs), including scenarios for success, retries, failures, deferral, and edge cases.

Once we converge on the architectural direction, I will add the corresponding test suite.

  2. Architectural concern. This builds a mini-executor inside an operator — running N tasks in threads with in-memory XCom, custom retry logic, and sleep()-based retry delays. The scheduler has no visibility into sub-task states, so if the worker dies mid-execution there's no record of which sub-tasks completed. This feels like it needs broader design discussion (probably an AIP) before merging, since it fundamentally changes how task execution works.

I agree this is an important architectural concern and worth discussing further.

The goal of this prototype is to explore a trade-off between observability and scheduling overhead; @ashb and @potiuk raised the same point earlier. If we try to preserve the same visibility and lifecycle guarantees as Dynamic Task Mapping, we essentially end up re-implementing DTM semantics, which brings back the same scheduler overhead this approach is trying to avoid.

This proposal intentionally explores a different point in that trade-off space: executing iterations within a single task while allowing controlled parallelism. That does mean the scheduler has less visibility into the internal execution units, but also less load.

  3. Thread safety. Several shared mutable structures (context dict, os.environ) are accessed concurrently from multiple threads without synchronization.

Good point — thread safety needs to be handled carefully here.

Regarding the task context, my understanding is that operators already receive a per-task context instance, but you're right that when running iterations concurrently we should avoid sharing mutable structures across threads. One possible approach would be to create a shallow or deep copy of the context for each execution unit to ensure isolation.

If you have concerns about specific structures (e.g., os.environ or others), I'm happy to address them and introduce appropriate synchronization or isolation mechanisms where needed.
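The per-execution-unit context copy suggested above can be sketched like this — assuming a plain dict context and a thread pool; the names (`base_context`, `run_iteration`) are illustrative, not from the PR:

```python
# Hedged sketch: isolate mutable context state per iteration by giving
# each thread its own deep copy, so concurrent mutation never touches
# the shared original.
import copy
from concurrent.futures import ThreadPoolExecutor

base_context = {"ti": {"try_number": 1}, "params": {"x": 0}}


def run_iteration(index: int) -> dict:
    ctx = copy.deepcopy(base_context)  # per-thread isolated copy
    ctx["params"]["x"] = index         # safe: mutates the copy only
    return ctx


with ThreadPoolExecutor(max_workers=4) as pool:
    # pool.map preserves input order in its results
    contexts = list(pool.map(run_iteration, range(4)))
```

After the pool drains, `base_context` is unchanged and each iteration saw its own value — the isolation property the thread-safety concern calls for. A shallow copy would not suffice here, since the nested dicts would still be shared.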

@dabla dabla changed the title from "refactor: Implemented Dynamic Task Iteration" to "Implemented Dynamic Task Iteration" Mar 5, 2026
@dabla dabla force-pushed the feature/dynamic-task-iteration branch from 1c2e763 to b855be9 on April 29, 2026 07:05