The big change here (other than just moving code around) is to introduce a
conceptual separation between Definition/Execution time and Scheduler time.
This means that the expansion of tasks (creating the TaskInstance rows with
different map_index values) is now done on the scheduler, and we now
deserialize to different classes. For example, when we deserialize the
`DictOfListsExpandInput` it gets turned into an instance of
`SchedulerDictOfListsExpandInput`. This is primarily designed so that DB access
is kept 100% out of the TaskSDK.
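
To illustrate the idea, here is a minimal sketch of what scheduler-side deserialization could look like. The lookup table, the serialized layout, and the constructor shown here are assumptions for illustration, not the actual serializer code:

```python
# Sketch only: the "type" discriminator, the registry, and the class body are
# hypothetical, but the point stands -- the scheduler deserializes to its own
# class, which is allowed to touch the DB, instead of the TaskSDK class.
from __future__ import annotations

from typing import Any


class SchedulerDictOfListsExpandInput:
    """Scheduler-time counterpart of DictOfListsExpandInput.

    It is responsible for working out how many TaskInstance rows
    (map_index values) the mapped task expands into.
    """

    def __init__(self, value: dict[str, Any]) -> None:
        self.value = value


# Hypothetical registry mapping the serialized discriminator to the
# scheduler-side class used when deserializing on the scheduler.
_EXPAND_INPUT_CLASSES: dict[str, type] = {
    "dict-of-lists": SchedulerDictOfListsExpandInput,
}


def deserialize_expand_input(data: dict[str, Any]) -> Any:
    """Turn a serialized expand input into its scheduler-time class."""
    cls = _EXPAND_INPUT_CLASSES[data["type"]]
    return cls(data["value"])
```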
Some of the changes here are on the "wat" side of the scale; they are mostly
there to avoid breaking all of our existing tests, and we have #45549 to look
at that more holistically.
To support "reduce" style tasks, which take as input a sequence of all the
pushed (mapped) XCom values, and to keep the previous behaviour of not loading
all values into memory at once, we have added a new HEAD route to the Task
Execution interface that returns the number of mapped XCom values, making it
possible to implement `__len__` on the new LazyXComSequence class.
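
As a rough sketch of how the HEAD route enables this, the sequence can answer `len()` with a count-only request and fetch individual values by map_index on demand. The client object, its method names, and the identifiers passed around are assumptions here, not the actual Task Execution API:

```python
# Minimal sketch of the idea behind LazyXComSequence; the `client` interface
# (xcoms.head / xcoms.get) is hypothetical.
from collections.abc import Sequence
from typing import Any


class LazyXComSequence(Sequence):
    """Lazily expose all mapped XCom values pushed by an upstream task
    without loading them into memory at once."""

    def __init__(self, client: Any, dag_id: str, run_id: str, task_id: str, key: str) -> None:
        self._client = client
        self._dag_id = dag_id
        self._run_id = run_id
        self._task_id = task_id
        self._key = key

    def __len__(self) -> int:
        # The HEAD route returns only the number of mapped XCom values,
        # so len() never pulls the values themselves.
        return self._client.xcoms.head(
            self._dag_id, self._run_id, self._task_id, self._key
        )

    def __getitem__(self, map_index: int) -> Any:
        # Each value is fetched on demand by its map_index.
        return self._client.xcoms.get(
            self._dag_id, self._run_id, self._task_id, self._key, map_index=map_index
        )
```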
This change also moves when and where in the TaskSDK execution-time code we
render templates and send RTIF fields to the server. This is needed because
calling `render_templates` also expands the Mapped operator. As a result, the
`startup` call parses the dag, renders templates, performs the runtime checks
(currently validating Inlets and Outlets with the API server), and returns the
context. This context is important because `ti.task` _in that context_ is
unmapped if required.
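
The ordering matters, so here is a sketch of it in code. The helper names on `client` are hypothetical stand-ins, not the real TaskSDK functions; only the sequence of steps reflects the description above:

```python
# Sketch of the startup ordering: parse, render (which unmaps), report RTIF,
# run runtime checks, then hand back the context. Helpers are placeholders.
from typing import Any


def startup(workload: Any, client: Any) -> tuple[Any, dict[str, Any]]:
    ti = client.parse_dag(workload)          # hypothetical: parse the dag, locate the TaskInstance
    context = ti.get_template_context()
    ti.render_templates(context=context)     # rendering also expands the Mapped operator,
                                             # so ti.task in this context is unmapped
    client.send_rtif(ti)                     # hypothetical: send rendered template fields to the server
    client.validate_inlets_and_outlets(ti)   # hypothetical: runtime checks against the API server
    return ti, context
```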
I have deleted a tranche of tests from tests/models that covered runtime
behaviour and are now tested in the TaskSDK instead.