Apply automated document enhancement modifications (#3165)
Applies the more straightforward automated document enhancement
modifications.

### Description

Applies the more straightforward automated document enhancement
modifications that are the same as what was merged in the 2.5 branch.

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Quick tests passed locally by running `./runtest.sh`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated.

Co-authored-by: Ziyue Xu <[email protected]>
Co-authored-by: Chester Chen <[email protected]>
3 people authored Feb 2, 2025
1 parent f8dd354 commit 0329d12
Showing 88 changed files with 233 additions and 238 deletions.
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/question.md
@@ -1,7 +1,7 @@
---
-name: Question (please use the Discussion tab)
+name: Question (please use the Discussions tab)
about: https://github.com/NVIDIA/NVFlare/discussions
-title: 'Please use NVFlare Discussion tab for questions'
+title: 'Please use NVFlare's Discussions tab for questions'
labels: ''
assignees: ''
---
4 changes: 2 additions & 2 deletions CODE_OF_CONDUCT.md
@@ -10,7 +10,7 @@ COMMUNITY | DEVELOPERS | PROJECT LEADS

## Our Pledge

-In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
+In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

@@ -34,7 +34,7 @@ Examples of unacceptable behavior by participants include:

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

-Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
+Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned with this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

2 changes: 1 addition & 1 deletion docs/example_applications_algorithms.rst
@@ -8,7 +8,7 @@ NVIDIA FLARE has several tutorials and examples to help you get started with fed

1. Hello World Examples
=======================
-Can be run from the :github_nvflare_link:`hello_world notebook <examples/hello-world/hello_world.ipynb>`.
+Can be run from :github_nvflare_link:`hello_world notebook <examples/hello-world/hello_world.ipynb>`.

.. toctree::
:maxdepth: 1
2 changes: 1 addition & 1 deletion docs/examples/hello_scatter_and_gather.rst
@@ -27,7 +27,7 @@ Due to the simplified weights, you will be able to clearly see and understand
the results of the FL aggregation and the model persistor process.

The setup of this exercise consists of one **server** and two **clients**.
-The server side model starting with weights ``[[1, 2, 3], [4, 5, 6], [7, 8, 9]]``.
+The server-side model starts with weights ``[[1, 2, 3], [4, 5, 6], [7, 8, 9]]``.

The following steps compose one cycle of weight updates, called a **round**:

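The round of weight updates this hunk refers to can be sketched in plain Python, using the simplified 3x3 server weights quoted above. This is a toy illustration only, not NVFlare code: the client "training" step (adding a fixed per-client delta) and both function names are invented for the sketch.

```python
# Toy sketch of one scatter-and-gather round with the simplified
# 3x3 weights from this example. Not NVFlare code: client_train's
# "training" (adding a constant delta) is a made-up stand-in.
server_weights = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

def client_train(weights, delta):
    # Each client returns its locally "trained" copy of the weights.
    return [[w + delta for w in row] for row in weights]

def aggregate(results):
    # Element-wise average of all client results for this round.
    n = len(results)
    return [
        [sum(r[i][j] for r in results) / n for j in range(len(results[0][0]))]
        for i in range(len(results[0]))
    ]

# Scatter: both clients receive the current server weights.
results = [client_train(server_weights, d) for d in (1.0, 3.0)]
# Gather: the server aggregates into the next round's weights.
server_weights = aggregate(results)
print(server_weights)  # each weight moves up by the mean delta, 2.0
```

With two clients adding deltas of 1 and 3, the averaged update is exactly the original weight plus 2, which makes the effect of aggregation easy to verify by eye, as the example's text promises.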
2 changes: 1 addition & 1 deletion docs/fl_introduction.rst
@@ -26,7 +26,7 @@ FL Terms and Definitions

.. note::

-Here we describe the centralized version of FL, where the FL server has the role of the aggregrator node. However in a decentralized version such as
+Here we describe the centralized version of FL, where the FL server has the role of the aggregator node. However in a decentralized version such as
swarm learning, FL clients can serve as the aggregator node instead.

- Types of FL
8 changes: 4 additions & 4 deletions docs/flare_overview.rst
@@ -16,8 +16,8 @@ Federated Computing

At its core, FLARE serves as a federated computing framework, with applications such as Federated Learning and Federated Analytics built upon this foundation.
Notably, it is agnostic to datasets, workloads, and domains. In contrast to centralized data lake solutions that necessitate copying data to a central location, FLARE brings computing capabilities directly to distributed datasets.
-This approach ensures that data remains within the compute node, with only pre-approved, selected results shared among collaborators.
-Moreover, FLARE is system agnostic, offering easy integration with various data processing frameworks through the implementation of the FLARE client.
+This approach ensures that data remains within the compute node, with only pre-approved, selected results being shared among collaborators.
+Moreover, FLARE is system-agnostic, offering easy integration with various data processing frameworks through the implementation of the FLARE client.
This client facilitates deployment in sub-processes, Docker containers, Kubernetes pods, HPC, or specialized systems.

Built for productivity
@@ -106,7 +106,7 @@ High-level System Architecture

As detailed above, FLARE incorporates components that empower researchers and developers to construct and deploy end-to-end federated learning applications.
The high-level architecture, depicted in the diagram below, encompasses the foundational layer of the FLARE communication, messaging streaming layers, and tools dedicated to privacy preservation and secure platform management.
-Atop this foundation lie the building blocks for federated learning applications, featuring a suite of federation workflows and learning algorithms.
+Atop this foundation are the building blocks for federated learning applications, featuring a suite of federation workflows and learning algorithms.
Adjacent to this central stack are tools facilitating experimentation and simulation with the FL Simulator and POC CLI, complemented by a set of tools designed for the deployment and management of production workflows.

.. image:: resources/flare_overview.png
@@ -123,7 +123,7 @@ Design Principles

**Less is more**
We strive to solve unique challenges by doing less while enabling others to do more.
-We can't solve whole worlds' problems, but by building an open platform we can enable others to solve world's problems.
+We can't solve whole world's problems, but by building an open platform, we can enable others to solve them.
This design principle means we intentionally limit the scope of the implementation, only building the necessary components.
For a given implementation, we follow specifications in a way that allows others to easily customize and extend.

7 changes: 3 additions & 4 deletions docs/index.rst
@@ -59,10 +59,9 @@ Additional examples can be found at the :ref:`Examples Applications <example_app
FLARE for Users
===============
If you want to learn how to interact with the FLARE system, please refer to the :ref:`User Guide <user_guide>`.
-When you are ready to for a secure, distributed deployment, the :ref:`Real World Federated Learning <real_world_fl>` section covers the tools and process
-required to deploy and operate a secure, real-world FLARE project.
+When you are ready for a secure, distributed deployment, the :ref:`Real World Federated Learning <real_world_fl>` section covers the tools and processes required to deploy and operate a secure, real-world FLARE project.

FLARE for Developers
====================
-When you're ready to build your own application, the :ref:`Programming Guide <programming_guide>`, :ref:`Programming Best Practices <best_practices>`, :ref:`FAQ<faq>`, and :ref:`API Reference <apidocs/modules>`
-give an in depth look at the FLARE platform and APIs.
+When you're ready to build your own application, the :ref:`Programming Guide <programming_guide>`, :ref:`Programming Best Practices <best_practices>`, :ref:`FAQ <faq>`, and :ref:`API Reference <apidocs/modules>`
+provide an in-depth look at the FLARE platform and APIs.
@@ -18,8 +18,8 @@ example that implements the :class:`cross site model evaluation workflow<nvflare

Previously in NVFlare before version 2.0, cross-site validation was built into the framework itself, and there was an
admin command to retrieve cross-site validation results. In NVFlare 2.0, with the ability to have customized
-workflows, cross-site validation is no longer in the NVFlare framework but is instead handled by the workflow. The
-the :github_nvflare_link:`cifar10 example <examples/advanced/cifar10>` is configured to run cross-site
+workflows, cross-site validation is no longer in the NVFlare framework but is instead handled by the workflow.
+The :github_nvflare_link:`cifar10 example <examples/advanced/cifar10>` is configured to run cross-site
model evaluation and ``config_fed_server.json`` is configured with :class:`ValidationJsonGenerator<nvflare.app_common.widgets.validation_json_generator.ValidationJsonGenerator>`
to write the results to a JSON file on the server.

@@ -2,8 +2,8 @@

Scatter and Gather Workflow
---------------------------
-The Federated scatter and gather workflow is an included reference implementation of the default workflow of previous versions
-of NVIDIA FLARE with a Server aggregating results from Clients that have produced Shareable results from their Trainer.
+The federated scatter and gather workflow is an included reference implementation of the default workflow in previous versions
+of NVIDIA FLARE, with a server aggregating results from clients that have produced shareable results from their trainer.

At the core, the control_flow of :class:`nvflare.app_common.workflows.scatter_and_gather.ScatterAndGather` is a for loop:

@@ -26,15 +26,15 @@ Learnable
For example, in the deep learning scenario, it can be the model weights.
In the AutoML case, it can be the network architecture.

-A :class:`LearnablePersistor<nvflare.app_common.abstract.learnable_persistor.LearnablePersistor>` defines how to load
+A :class:`LearnablePersistor<nvflare.app_common.abstract.learnable_persistor.LearnablePersistor>` defines how to load and save
and save a ``Learnable``. ``Learnable`` is a subset of the model file (which can contain other data like LR schedule)
which is to be learned, like the model weights.

.. _aggregator:

Aggregator
^^^^^^^^^^
-:class:`Aggregators<nvflare.app_common.abstract.aggregator.Aggregator>` define the aggregation algorithm to aggregate the ``Shareable``.
+:class:`Aggregator<nvflare.app_common.abstract.aggregator.Aggregator>` defines the aggregation algorithm to aggregate the ``Shareable``.
For example, a simple aggregator would be just average all the ``Shareable`` of the same round.

Below is the signature for an aggregator.
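The simple round-averaging behavior described in this hunk can be sketched in plain Python. This is a toy illustration, not the NVFlare ``Aggregator`` interface: the real class works on ``Shareable`` objects and an ``FLContext``, and the method names and list-of-floats payload below are assumptions made for the sketch.

```python
class SimpleAverageAggregator:
    """Toy element-wise averaging of client results for one round.

    Illustrates the accept-then-aggregate pattern only; the real
    NVFlare Aggregator operates on Shareable objects, not raw lists.
    """

    def __init__(self):
        self._received = []

    def accept(self, client_weights):
        # Store one client's contribution for the current round.
        self._received.append(client_weights)

    def aggregate(self):
        # Element-wise average across all accepted contributions.
        n = len(self._received)
        avg = [sum(vals) / n for vals in zip(*self._received)]
        self._received.clear()  # reset for the next round
        return avg

agg = SimpleAverageAggregator()
agg.accept([1.0, 2.0, 3.0])
agg.accept([3.0, 4.0, 5.0])
print(agg.aggregate())  # [2.0, 3.0, 4.0]
```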
@@ -31,7 +31,7 @@ Requirements
Depending on where the trainer is running, the connection may or may not need to be in secure mode (TLS).
- We will need to modify the "project.yml" for NVFlare provision system
and generate new package folders for each participating sites
-- The trainer must be a Python program that can integrate with the NVFLARE library.
+- The trainer must be a Python program that can integrate with the NVFlare library.
- The trainer must be able to connect to the server, as well as the address that
is dynamically opened by the FL client.

18 changes: 9 additions & 9 deletions docs/programming_guide/execution_api_type/client_api.rst
@@ -248,16 +248,16 @@ that use Client API to write the

Selection of Job Templates
==========================
-To help user quickly setup job configurations, we create many job templates. You can pick one job template that close to your use cases
-and adapt to your needs by modify the needed variables.
+To help users quickly set up job configurations, we have created numerous job templates. You can select a job template that closely matches
+your use case and adapt it to your needs by modifying the necessary variables.

-use command ``nvflare job list_templates`` you can find all job templates nvflare provided.
+Using the command ``nvflare job list_templates``, you can find all the job templates provided by NVFlare.

.. image:: ../../resources/list_templates_results.png
:height: 300px

-looking at the ``Execution API Type``, you will find ``client_api``. That's indicates the specified job template will use
-Client API configuration. You can further nail down the selection by choice of machine learning framework: pytorch or sklearn or xgboost,
+looking at the ``Execution API Type``, you will find ``client_api``. This indicates that the specified job template will use the Client API
+configuration. You can further nail down the selection by choice of machine learning framework: pytorch or sklearn or xgboost,
in-process or not, type of models ( GNN, NeMo LLM), workflow patterns ( Swarm learning or standard fedavg with scatter and gather (sag)) etc.


@@ -271,11 +271,11 @@ For example:
.. code-block:: python
class CustomClass:
-    def __init__(self, x, y):
-        self.x = 1
-        self.y = 2
+    def __init__(self, x, y):
+        self.x = 1
+        self.y = 2
-If you are using classes derived from ``Enum`` or dataclass, they will be handled by the default decomposers.
+If your code uses classes derived from ``Enum`` or dataclasses, they will be handled by the default decomposers.
For other custom classes, you will need to write a dedicated custom decomposer and ensure it is registered
using fobs.register on both the server side and client side, as well as in train.py.

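The idea behind the custom decomposer mentioned in this hunk can be sketched without the FOBS API: break a custom object down into plain, serializable parts and rebuild it. The class and method names below are hypothetical, chosen for the sketch; the real NVFlare decomposer interface and its ``fobs.register`` call have additional requirements not shown here.

```python
# Toy sketch of what a decomposer does. Hypothetical names; not the
# actual NVFlare FOBS decomposer interface.
class CustomClass:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class CustomClassDecomposer:
    def decompose(self, obj):
        # Reduce the object to a plain dict that any serializer handles.
        return {"x": obj.x, "y": obj.y}

    def recompose(self, data):
        # Rebuild an equivalent object from the plain parts.
        return CustomClass(data["x"], data["y"])

dec = CustomClassDecomposer()
restored = dec.recompose(dec.decompose(CustomClass(1, 2)))
print(restored.x, restored.y)  # 1 2
```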
4 changes: 2 additions & 2 deletions docs/programming_guide/filters.rst
@@ -48,9 +48,9 @@ For an example application using SVTPrivacy, see :github_nvflare_link:`Different

DXO - Data Exchange Object
===========================
-The message object passed between the server and clients are of Shareable class. Shareable is a general structure for all kinds of communication (task interaction, aux messages, fed events, etc.) that in addition to the message payload, also carries contextual information (such as peer FL context). NVFLARE's DXO object is a general-purpose structure that is meant to be used to carry message payload in a self-descriptive manner. As an analogy, think of Shareable as an HTTP message, whereas a DXO as a JPEG image that is carried by the HTTP message.
+The message object passed between the server and clients is of the Shareable class. Shareable is a general structure for all kinds of communication (task interaction, aux messages, fed events, etc.) that in addition to the message payload, also carries contextual information (such as peer FL context). NVFLARE's DXO object is a general-purpose structure that is meant to be used to carry message payload in a self-descriptive manner. As an analogy, think of Shareable as an HTTP message, whereas a DXO as a JPEG image that is carried by the HTTP message.

-An DXO object has the following properties:
+A DXO object has the following properties:

- Data Kind - the kind of data the DXO object carries (e.g. WEIGHTS, WEIGHT_DIFF, COLLECTION of DXOs, etc.)
- Meta - meta properties that describe the data (e.g. whether processed/encrypted and processing algorithm). This is a dict.
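The self-descriptive payload idea in this hunk can be sketched as a minimal container. This is illustrative only, not the actual :class:`DXO` class: field names follow the property list quoted above, but a real DXO also validates data kinds and converts to and from ``Shareable``.

```python
class ToyDXO:
    """Illustrative payload container mirroring the DXO property list.

    Not NVFlare's DXO class; it carries a data kind, the payload, and
    a meta dict describing the payload, and nothing more.
    """

    def __init__(self, data_kind, data, meta=None):
        self.data_kind = data_kind  # e.g. "WEIGHTS" or "WEIGHT_DIFF"
        self.data = data            # the payload itself
        self.meta = meta or {}      # properties describing the data

dxo = ToyDXO("WEIGHTS", {"layer1": [0.1, 0.2]}, meta={"processed": False})
print(dxo.data_kind)  # WEIGHTS
```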
6 changes: 3 additions & 3 deletions docs/programming_guide/fl_model.rst
@@ -9,9 +9,9 @@ that captures the common attributes needed for exchanging learning results.
This is particularly useful when NVFlare system needs to exchange learning
information with external training scripts/systems.

-The external training script/system only need to extract the required
-information from received FLModel, run local training, and put the results
+The external training script or system only needs to extract the required
+information from the received FLModel, run local training, and put the results
in a new FLModel to be sent back.

-For a detailed explanation of each attributes, please refer to the API doc:
+For a detailed explanation of each attribute, please refer to the API doc:
:mod:`FLModel<nvflare.app_common.abstract.fl_model>`
2 changes: 1 addition & 1 deletion docs/programming_guide/provisioning_system.rst
@@ -105,7 +105,7 @@ the Project instance:
Participant
-----------
Each participant is one entity that communicates with other participants inside the NVIDIA FLARE system during runtime.
-Each participant has the following attributes: type, name, org and props. The attribute ``props`` is a dictionary and
+Each participant has the following attributes: type, name, org, and props. The attribute ``props`` is a dictionary and
stores additional information:

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/programming_guide/resource_manager_and_consumer.rst
@@ -166,7 +166,7 @@ You can easily write your own resource manager and consumer following the API sp
@abstractmethod
def report_resources(self, fl_ctx) -> dict:
"""Reports resources."""
-Pass
+pass
A more friendly interface (AutoCleanResourceManager) is provided as well:
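A minimal illustration of implementing the ``report_resources`` method shown in this hunk: a toy manager that reports a static view of its resources. The class name, constructor, and the ``num_gpus`` key are assumptions made for the sketch; a real NVFlare resource manager implements the full spec, not just this one method.

```python
class ToyGPUResourceManager:
    """Toy example: report a static view of available resources.

    Illustrative only; a real NVFlare resource manager implements
    the complete resource manager interface, not just this method.
    """

    def __init__(self, num_gpus):
        self.num_gpus = num_gpus

    def report_resources(self, fl_ctx) -> dict:
        """Reports resources."""
        # fl_ctx is unused in this toy; a real manager may consult it.
        return {"num_gpus": self.num_gpus}

mgr = ToyGPUResourceManager(num_gpus=2)
print(mgr.report_resources(fl_ctx=None))  # {'num_gpus': 2}
```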
2 changes: 1 addition & 1 deletion docs/programming_guide/system_architecture.rst
@@ -35,7 +35,7 @@ See the example :ref:`project_yml` for how these components are configured in St
Overseer
--------
The Overseer is a system component that determines the hot FL server at any time for high availability.
-The name of the Overseer must be unique and in the format of fully qualified domain names. During
+The name of the Overseer must be unique and in the format of a fully qualified domain name. During
provisioning time, if the name is specified incorrectly, either being duplicate or containing incompatible
characters, the provision command will fail with an error message. It is possible to use a unique hostname rather than
FQDN, with the IP mapped to the hostname by having it added to ``/etc/hosts``.
2 changes: 1 addition & 1 deletion docs/real_world_fl/flare_api.rst
@@ -48,7 +48,7 @@ the session in a finally clause:
try:
print(sess.get_system_info())
-job_id = sess.submit_job("/workspace/locataion_of_jobs/job1")
+job_id = sess.submit_job("/workspace/location_of_jobs/job1")
print(job_id + " was submitted")
# monitor_job() waits until the job is done, see the section about it below for details
sess.monitor_job(job_id)