Replies: 2 comments 3 replies
-
I did figure out a way to mimic what I am looking for. The callback below is attached as the `on_failure_callback` on any task before the mapped ones. It only works, though, because our DAGs don't branch. `def fail_all_downstream(context):`
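The body of that callback wasn't shared, but the idea is straightforward on a linear chain. Here is a minimal, self-contained sketch of the logic; the task-state model and helper below are hypothetical stand-ins, not Airflow's real `TaskInstance` API:

```python
# Hypothetical sketch of the fail-all-downstream idea on a linear
# (non-branching) chain of tasks. In real Airflow the callback receives a
# `context` dict and updates TaskInstance states; here tasks are modeled
# as an ordered list of dicts to show the logic only.

def fail_all_downstream(tasks, failed_task_id):
    """Mark every task after `failed_task_id` as 'upstream_failed'.

    Only safe on a linear DAG: with branching, tasks on unrelated
    branches would be wrongly failed as well.
    """
    seen_failure = False
    for task in tasks:
        if task["id"] == failed_task_id:
            task["state"] = "failed"
            seen_failure = True
        elif seen_failure:
            task["state"] = "upstream_failed"
    return tasks


chain = [{"id": tid, "state": "success"} for tid in ("get", "process", "send")]
fail_all_downstream(chain, "get")
# "process" and "send" both end up 'upstream_failed', so the DAG run
# finishes instead of sitting idle with nothing to map.
```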
-
I have a hard time understanding what you want to say and do here.
-
I have seen some chatter about an All_Done_One_Success trigger rule, but I didn't find anywhere that it was considered for addition. I feel like, with the introduction of mapped tasks, this could be very useful. My example use case:
Get x orders from source (sometimes there are two steps before it gets to mapping) > map x tasks to process orders > map x tasks for sending orders
The way I have the mapped tasks set up is that they always map off the get step and have their trigger rule set to all_done. There is a check in the operator to see whether the data mapped from the get step was actually processed in the previous step, and it errors out if it wasn't. This lets us easily go back and clear the set of tasks that belong to the same order without clearing the whole mapped task and remapping the ones that already succeeded. The problem is that if the get step fails, the mapped tasks try to run but have nothing to map, so the DAG just sits there running with no tasks executing.
This trigger rule would solve that: I do want the mapped tasks to wait for upstream to be done, but if there were no successes then there is no data to map with, so they should just go to upstream_failed.
That being said, maybe there's a simpler approach that just has the mapping fail if it didn't find any data to map with. Thoughts?
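For what it's worth, the decision logic such a rule would need is small. A hedged sketch as a pure function (the state names are borrowed from Airflow, but this function and its return values are hypothetical, not part of any Airflow API):

```python
# Hypothetical decision function for an "all_done_one_success" trigger rule:
# wait until every upstream task has finished, then run only if at least one
# of them succeeded; otherwise resolve to 'upstream_failed'.

DONE_STATES = {"success", "failed", "upstream_failed", "skipped"}

def all_done_one_success(upstream_states):
    """Return 'wait', 'run', or 'upstream_failed' for the downstream task."""
    if any(state not in DONE_STATES for state in upstream_states):
        return "wait"            # some upstream task is still running
    if "success" in upstream_states:
        return "run"             # at least one success -> there is data to map
    return "upstream_failed"     # everything finished, nothing succeeded


print(all_done_one_success(["success", "failed"]))   # run
print(all_done_one_success(["running", "success"]))  # wait
print(all_done_one_success(["failed", "failed"]))    # upstream_failed
```

The last case is exactly the scenario above: the get step failed, so the mapped tasks resolve immediately instead of the DAG hanging.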