there is no `tff.AGGREGATORS` placement. Without such a placement, implementing
aggregation protocols in TFF can require low-level programming of the TFF executor
stacks, as evidenced by
[this community attempt to integrate secure aggregation](https://github.com/tf-encrypted/rfcs/blob/master/20190924-tensorflow-federated/integration-strategies.md).
By adding a new `tff.AGGREGATORS` placement, users can more easily implement new
aggregation protocols by expressing them as federated computations in FC.

## User Benefit

Users are unencumbered by the constraints of the current federated types in FC when
expressing custom aggregation protocols. Users can now do so in the Federated Core
by working directly with federated values placed on `tff.AGGREGATORS`.
## Design Proposal

idea, and list pros/cons to each approach. If there are alternatives that you
have eliminated, you should also list those here, and explain why you believe
your chosen approach is superior.*

Adding the `tff.AGGREGATORS` placement for federated types involves adding a new `Placement` and `PlacementLiteral`, and then extending the compiler to recognize federated values with this placement when computing intrinsics. The compiler generally recognizes separate intrinsics by placement; e.g., `tff.federated_value(value, placement)` is actually interpreted by the compiler as `federated_value_at_clients(value)` or `federated_value_at_server(value)`, depending on the provided `placement`. This means we will want to add new intrinsics that correspond to `tff.AGGREGATORS`, e.g. `federated_value_at_aggregators`.
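
For illustration, the sketch below shows what a simple federated computation using the new placement could look like. Only `tff.AGGREGATORS` (and the `federated_value_at_aggregators` intrinsic it would resolve to) is proposed here; the rest is existing FC API.

```python
import tensorflow_federated as tff

# Sketch only: `tff.AGGREGATORS` is the placement proposed in this RFC and does
# not exist in TFF today.
@tff.federated_computation
def ten_at_aggregators():
  # Mirrors tff.federated_value(10, tff.SERVER); the compiler would dispatch
  # this call to a new `federated_value_at_aggregators` intrinsic, just as it
  # dispatches the existing placements to `federated_value_at_server` and
  # `federated_value_at_clients`.
  return tff.federated_value(10, tff.AGGREGATORS)
```

The result would have the federated type `int32@AGGREGATORS`.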

Existing federated computations that will need modification fall into the two categories below:

1. Intrinsics for federated computations that are already parameterized by placement. Note that some of these functions don't have a `placement` arg in their public API signature, but internally correspond to different IntrinsicDefs based on the placement of their federated input(s); see the sketch after this list.
   - `federated_eval`
   - `federated_map`
   - `federated_value`
   - `federated_zip`
   - `sequence_map`
2. Intrinsics that will need to be parameterized by placement, but currently aren't.
   - `federated_aggregate`
   - `federated_broadcast`
   - `federated_collect`
   - `federated_mean`
   - `federated_reduce`
   - `federated_secure_sum`
   - `federated_sum`
   - `sequence_reduce`
   - `sequence_sum`
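
As a concrete illustration of the first category, `federated_map` already behaves differently depending on where its argument is placed, without exposing a `placement` argument. The sketch below uses only existing API:

```python
import tensorflow as tf
import tensorflow_federated as tff

@tff.tf_computation(tf.int32)
def add_one(x):
  return x + 1

# No `placement` argument is exposed, but the compiler resolves each call to a
# different IntrinsicDef based on the placement of the federated input.
@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def map_at_clients(values):
  return tff.federated_map(add_one, values)  # resolves to the CLIENTS intrinsic

@tff.federated_computation(tff.FederatedType(tf.int32, tff.SERVER))
def map_at_server(value):
  return tff.federated_map(add_one, value)  # resolves to the SERVER intrinsic
```

Under this proposal, the same dispatch mechanism would gain a third target for inputs placed at `tff.AGGREGATORS`.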

Intrinsics in the latter category will likely need further discussion. This is because implementation details could change aspects of the underlying "federated algebra", like closure, or could introduce subtle semantic changes.

As an example, assume we extend `federated_collect` to handle signatures of `T@CLIENTS -> T*@AGGREGATORS` and `T@AGGREGATORS -> T*@SERVER` (in addition to the current `CLIENTS -> SERVER`). If we want to maintain algebraic closure, we would extend `federated_broadcast` to handle `T@SERVER -> {T}@AGGREGATORS` and `T@AGGREGATORS -> {T}@CLIENTS`; similarly, we'd extend `sequence_reduce` to handle values of type `T@AGGREGATORS`. In this scenario, the new `federated_broadcast` would be a natural generalization of the old; however, it's not clear whether this kind of semantic change would be confusing to users of the FC.

We hope this will be a good starting point for discussion. Ultimately, this RFC process should result in the exact type signatures that each of the modified/new IntrinsicDefs should satisfy.

### Alternatives Considered

A lower-effort alternative might be to expect users to write custom executors, or custom executor stacks, to include additional "aggregator" parties when executing intrinsics. "AGGREGATORS" would stay outside of the FC type system, but could still be included in federated computations. This might allow library designers to extend TFF for their own use cases, but hinders the majority of TFF users, who are not expected to learn the executor API.

We also briefly considered the name `tff.AGGREGATOR` instead of `tff.AGGREGATORS`. We decided on the latter for two reasons:

1. `tff.AGGREGATOR` does not capture the possibility of multiple executor stacks coordinating aggregation. The existing `ComposingExecutor` qualifies as one such case.
2. `tff.AGGREGATOR` is equivalent to a singleton `tff.AGGREGATORS`.

### Performance Implications

This is an additive improvement to the FC, so there should be no performance implications for existing functionality. TFF is designed to support this kind of addition with minimal overhead. New functionality could be less performant relative to current practices, but only from overhead inherent to adding a new node to a distributed computation.

### Dependencies

This change brings no new dependencies. Since this proposal adds a new federated type, any project that enforces limits based on the current federated types may have to be updated. We will work with the TFF team to identify any affected projects and limit any breaking changes.

### Engineering Impact

This code will likely bring marginal increases to build and test time, but changes to binary size should be negligible. Executor factories including a stack for the `tff.AGGREGATORS` placement will experience a nontrivial increase in startup time, but not all executor factories will need to include a stack for this placement.

The code for this change will be mixed into existing modules in the TF Federated core. Since it affects the type system used by the TFF compiler and requires that relevant intrinsic definitions be modified to recognize a new placement, it will touch many different places in the TFF stack. Those who already own and maintain those code units will maintain and improve the change in the future, which makes their feedback critical throughout design and implementation.

### Best Practices

The new `Placement` for federated types brings an addition to the Federated Core, which will be communicated in the TFF API documentation. This will only be relevant for users of the lower-level Federated Core, at least until a higher-level API is included that relies on it. Below, we detail how this change should be communicated in existing tutorials.

### Tutorials and Examples

Since this is a modification of an existing API, it likely does not warrant a new tutorial. We instead suggest modifying the existing [Part 1 Federated Core tutorial (FC 1)](https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1) to include one or more federated computations that operate on Aggregator-placed data. We also considered modifying part 2 of the FC tutorial, but decided against that due to its stated goals.

Concretely, we recommend two modifications to the FC 1 tutorial:

- In the "Placement" section, the discussion will need to include the `tff.AGGREGATORS` placement. This section should stress that the placement can be considered optional, whereas the others (`tff.CLIENTS`, `tff.SERVER`) are strictly necessary for most interesting federated computations.
- In the "Composing Federated Computations" section, we recommend adding a short sub-section or paragraph that describes how one might refactor the `get_average_temperature` function to perform its `federated_mean` with a placement of `tff.AGGREGATORS`. We include short and long form examples below for consideration.

```python
  # ... (rest of the example elided) ...
  return tff.federated_map(lambda x: x / num_clients, total_temp)
```
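
For reference, a short-form sketch of such a refactor might look like the following. The `placement` argument on `federated_mean` and `tff.AGGREGATORS` itself are proposed, not existing, API, and the exact signature is one of the open questions below; a production version would also need to weight the per-aggregator means (e.g. via `federated_mean`'s existing `weight` argument) so the result equals the global mean.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Sketch only: assumes the proposed tff.AGGREGATORS placement and a `placement`
# keyword argument on federated_mean; neither exists in TFF today.
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
def get_average_temperature(sensor_readings):
  # {float32}@CLIENTS -> float32@AGGREGATORS: each aggregator averages the
  # readings of the clients it serves.
  partial_means = tff.federated_mean(sensor_readings, placement=tff.AGGREGATORS)
  # float32@AGGREGATORS -> float32@SERVER: the server combines the aggregators'
  # partial results (unweighted here, for brevity).
  return tff.federated_mean(partial_means, placement=tff.SERVER)
```
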
### Compatibility

* Does the design conform to the backwards & forwards compatibility [requirements](https://www.tensorflow.org/programmers_guide/version_compat)?

Since this design adds new functionality, it would change the public API. While TFF is still pre-1.0, it does not yet explicitly guarantee backwards compatibility of its public API. Nevertheless, we can hope to limit impact on the public API through judicious use of default keyword arguments.

Concretely, we can maintain backwards compatibility for federated computations that gain a `placement` keyword argument by defaulting that argument to `tff.SERVER`. We recommend _judicious_ use because there may be instances where a change in semantics justifies a breaking change. These should be taken on a case-by-case basis, and we hope to clearly define and justify any breaking changes that might arise.
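
As a sketch of what this could look like from a caller's perspective (again assuming the proposed `placement` argument and `tff.AGGREGATORS` placement, neither of which exists today):

```python
import tensorflow as tf
import tensorflow_federated as tff

# Existing code keeps its meaning: with `placement` defaulting to tff.SERVER,
# this remains {int32}@CLIENTS -> int32@SERVER.
@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def existing_sum(values):
  return tff.federated_sum(values)

# New code opts in explicitly to the proposed placement (hypothetical API).
@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def sum_at_aggregators(values):
  return tff.federated_sum(values, placement=tff.AGGREGATORS)
```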

This design does not significantly impact compatibility with the rest of the TF ecosystem.

## Questions and Discussion Topics

- Which of the intrinsics above should actually be modified/parameterized?
- How strict should we be about algebraic closure in the federated type system? There could be an argument against strict closure, e.g. if we want to limit which intrinsics can ever involve `tff.AGGREGATORS`.
- Are the existing tutorial changes sufficient? What is the best way to communicate these changes in existing documentation?
- What should the implementation/release strategy be? Should this wait until TFF 1.x.x?
- Once changes to current intrinsics have been planned, what qualifies as a "judicious" use of defaults for maintaining backwards compatibility?