Grammar corrections #229

Merged · 1 commit · Apr 1, 2025

content/en/docs/guides/install-guides/common-components.md (2 additions, 2 deletions)

@@ -88,11 +88,11 @@ kpt live apply nephio-operator --reconcile-timeout=15m --output=table

## Management Cluster GitOps Tool

- A GitOps tool (ConfigSync) is installed to allow GitOps-based application of packages on the management cluster itself.
+ A GitOps tool (configsync) is installed to allow GitOps-based application of packages on the management cluster itself.
It is not needed if you only want to provision network functions, but it is used extensively in the cluster provisioning
workflows.

- Different GitOps tools may be used, but these instructions only cover ConfigSync.
+ Different GitOps tools may be used, but these instructions only cover configsync.
To install it on the management cluster, we again follow the same process.
Later, we will configure it to point to the *mgmt* repository:
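
For context, the install step that follows this paragraph uses the same kpt workflow as the other components. A minimal sketch, assuming the package lives under the Nephio catalog's core packages (the catalog path and @main revision are assumptions, not taken from this diff):

```bash
# Fetch, render, and apply the configsync package on the management cluster
# (catalog path and revision are assumed for illustration only)
kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/nephio/core/configsync@main configsync
kpt fn render configsync
kpt live init configsync
kpt live apply configsync --reconcile-timeout=15m --output=table
```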

content/en/docs/guides/install-guides/explore-sandbox.md (2 additions, 2 deletions)

@@ -38,7 +38,7 @@ the cluster using the `kpt live status` command on the unpacked packages in the

The rendered *kpt* packages containing components are unpacked in the */tmp/kpt-pkg* directory. The rendered *kpt* packages
that create the *mgmt* and *mgmt-staging* repositories are unpacked in the */tmp/repository* directory. The rendered *kpt*
- package containing the *rootsync* configuration for the *mgmt* repository is unpacked in the */tmp/rootsync* directory.
+ package containing the *RootSync* configuration for the *mgmt* repository is unpacked in the */tmp/rootsync* directory.
You can examine the contents of any rendered *kpt* packager by examining the contents of these directories.

```bash
@@ -123,7 +123,7 @@ interacts closely with.
| Component | Purpose |
| ------------------ | ------------------------------------------------------------------------------------------------|
| Porch | Google Container Tools Package Orchestration Server, provides an API used by Nephio to work with packages in git repositories |
- | ConfigSync | Google Container Tools Configuration Synchronization, used by Nephio to deploy configurations from repositories from the Management cluster onto Workload clusters |
+ | configsync | Google Container Tools Configuration Synchronization, used by Nephio to deploy configurations from repositories from the Management cluster onto Workload clusters |
| Nephio Controllers | The Nephio controllers, which implement the Nephio functionality to fetch, manipulate, and deploy NFs |
| Nephio WebUI | The Nephio web client |
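
A quick way to confirm that the components in this table are running is to list their pods; the sketch below assumes default sandbox namespaces (the namespace names are assumptions, not taken from this diff):

```bash
# Namespaces below are assumed defaults for a sandbox install
kubectl get pods -n porch-system              # Porch
kubectl get pods -n config-management-system  # configsync
kubectl get pods -n nephio-system             # Nephio controllers
kubectl get pods -n nephio-webui              # Nephio WebUI
```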

content/en/docs/guides/install-guides/install-on-gce.md (1 addition, 1 deletion)

@@ -64,7 +64,7 @@ browse the Nephio Web UI

## Open Terminal

- You will probably want a second SSh window open to run *kubectl* commands, etc.,
+ You will probably want a second SSH window open to run *kubectl* commands, etc.,
without the port forwarding (which would fail if you try to open a second SSH
connection with that setting).
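
For illustration, a plain second session can be opened without the forwarding flag (the instance name and zone are placeholders, not values from this guide):

```bash
# Second SSH session with no -L port forwarding; run kubectl here as usual
gcloud compute ssh ubuntu@<vm-name> --zone <zone>
```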

content/en/docs/guides/install-guides/install-on-gcp.md (2 additions, 2 deletions)

@@ -508,7 +508,7 @@ Project [your-nephio-project-id] repository [config-control] was cloned to [/hom


Before you start adding things to that repository, set up Config Sync to pull configurations from there by creating a
- rootsync in Config Controller. There is a package available to help properly configure the rootsync:
+ RootSync in Config Controller. There is a package available to help properly configure the RootSync:

```bash
kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/distros/gcp/cc-rootsync@main
@@ -585,7 +585,7 @@ Successfully executed 2 function(s) in 1 package(s).

In the sandbox exercises, you may have used `kpt live apply` to apply the package at this point. In this case, there are
restrictions in Config Controller that interfere with the operation of `kpt live`. So, instead, you can just directly
- apply the rootsync resources with `kubectl`:
+ apply the RootSync resources with `kubectl`:

```bash
kubectl apply -f cc-rootsync/rootsync.yaml
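
# Illustrative follow-up, not part of this change: verify that the RootSync
# reconciles (config-management-system is an assumed default namespace)
kubectl get rootsync -n config-management-system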

content/en/docs/guides/user-guides/_index.md (8 additions, 8 deletions)

@@ -77,15 +77,15 @@ refer to [Kpt book](https://kpt.dev/book/).
those supported by the kpt CLI, but makes them available as a Kubernetes service. For more details refer to
[Porch user guide](https://kpt.dev/guides/porch-user-guide).

- ### ConfigSync
+ ### configsync

- Config Sync lets cluster operators and platform administrators deploy consistent configurations and policies. You can
+ configsync lets cluster operators and platform administrators deploy consistent configurations and policies. You can
deploy these configurations and policies to individual Kubernetes clusters, multiple clusters that can span hybrid and
multi-cloud environments, and multiple namespaces within clusters. This process simplifies and automates configuration
- and policy management at scale. Config Sync also lets development teams independently manage their namespaces within
+ and policy management at scale. configsync also lets development teams independently manage their namespaces within
clusters, while still being subject to policy guardrails set by administrators.

- Config Sync is an open-source project and the source and releases available
+ configsync is an open-source project and the source and releases available
[here](https://www.github.com/GoogleContainerTools/kpt-config-sync).
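
As a sketch of how a cluster gets pointed at a repository, a minimal RootSync could look like the following (the repository URL and names are placeholders, not taken from this document):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://example.com/your-org/mgmt.git   # placeholder repository
    branch: main
    dir: /
    auth: none
EOF
```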

### Packages
@@ -200,7 +200,7 @@ The diagram below depicts deployment at the high level.

#### Infrastructure Components
* Porch
- * ConfigSync
+ * configsync

#### Nephio Controllers
* Nephio Controller Operator
@@ -225,7 +225,7 @@ The diagram below depicts deployment at the high level.
### Workload Cluster Components

#### Infrastructure Components
- * ConfigSync
+ * configsync

#### Nephio Controllers
* Watcher agent
@@ -243,7 +243,7 @@ The diagram below depicts deployment at the high level.

## Management Cluster Details
* Role
- * Infrastructure components (Porch, ConfigSync)
+ * Infrastructure components (Porch, configsync)
* Components
* Specializes
* Injectors
@@ -254,7 +254,7 @@ The diagram below depicts deployment at the high level.
* Web UI

## Workload Cluster Details
- * Infrastructure components (ConfigSync)
+ * Infrastructure components (configsync)
* Operators
* Controllers

@@ -14,7 +14,7 @@ These exercises will take you from a system with only the Nephio Management clus

- A Regional cluster
- Two Edge clusters
- - Repositories for each cluster, registered with Nephio, and with ConfigSync set up to pull from those repositories.
+ - Repositories for each cluster, registered with Nephio, and with configsync set up to pull from those repositories.
- Inter-cluster networking between those clusters
- A complete free5GC deployment including:

@@ -196,7 +196,7 @@ mgmt-08c26219f9879acdefed3469f8c3cf89d5db3868 approved
```


- ConfigSync running in the Management cluster will now pull out this new package, create all the resources necessary to
+ configsync running in the Management cluster will now pull out this new package, create all the resources necessary to
provision a KinD cluster, and register it with Nephio. This will take about five minutes or so.
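
While waiting, progress can be watched from the Management cluster; a sketch of typical checks (the resource kinds depend on the cluster provider used by the package):

```bash
# Illustrative checks while the package is being reconciled
kubectl get packagevariants
kubectl get clusters.cluster.x-k8s.io
```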

## Step 2: Check the Regional cluster installation

@@ -97,7 +97,7 @@ It will take around 15 minutes to create the three clusters. You can check the p
*mgmt* and *mgmt-staging* repository. After couple of minutes you should see three independent repositories (Core,
Regional and Edge) for each workload cluster.

- You can also look at the state of packagerevisions for the three packages. You can use the below command
+ You can also look at the state of PackageRevisions for the three packages. You can use the below command

```bash
kubectl get packagerevisions | grep -E 'core|regional|edge' | grep mgmt

@@ -107,7 +107,7 @@ Sample output:
packagevariant.config.porch.kpt.dev/flux-regional-cluster created
```

- ConfigSync running in the Management cluster will create all the resources necessary to
+ configsync running in the Management cluster will create all the resources necessary to
provision a KinD cluster, and register it with Nephio. This can take some time.

## Step 2: Check the cluster installation
@@ -239,8 +239,8 @@ Sample output:
}
```
As you can see, it *sourceRef* references the *regional* GitRepository CR and defaults to the root *path* of the repository.
- It also holds a reference to the *kubeConfig* Secret used to access the Workload Cluster, which was created
- during the cluster api instantiation.
+ It also holds a reference to the *kubeconfig* Secret used to access the Workload Cluster, which was created
+ during the cluster API instantiation.
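
For orientation only, a Kustomization of the shape described above might look roughly like this sketch (the names, namespace, and interval are assumptions; the real resource comes from the package, not from this diff):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: regional
  namespace: flux-system
spec:
  interval: 1m
  path: ./            # defaults to the repository root, as noted above
  prune: true
  sourceRef:
    kind: GitRepository
    name: regional
  kubeConfig:
    secretRef:
      name: regional-kubeconfig   # Secret created during cluster API instantiation
EOF
```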


## Step 4: Deploy a sample workload to the Cluster

@@ -10,7 +10,7 @@ weight: 2

- A Nephio Management cluster:
- [installation guides](/content/en/docs/guides/install-guides/_index.md) for detailed environment options.
- - The following *optional* operator pkg deployed:
+ - The following *optional* operator package deployed:
- [o2ims operator](/content/en/docs/guides/install-guides/optional-components.md#o2ims-operator)

{{% alert title="Note" color="primary" %}}

content/en/docs/network-architecture/o-ran-integration.md (5 additions, 5 deletions)

@@ -61,14 +61,14 @@ The expectation is that each O-RAN CNF vendor defines O-Cloud Templates for each

### High-Level design of SMO FOCOM using Nephio in R4

- A FOCOM service implementation based on Nephio enablers uses a KPT-based cluster package management solution where:
+ A FOCOM service implementation based on Nephio enablers uses a Kpt-based cluster package management solution where:

- - The stored O-Cloud Template information in FOCOM is realized by a Git based cluster template repository where each O-Cloud Template information record is realized with a KPT package blueprint
- - Each FOCOM O-Cloud Template KPT package blueprint refer one-to-one to a corresponding IMS-side O-Cloud Template in a specific O-Cloud IMS
+ - The stored O-Cloud Template information in FOCOM is realized by a Git based cluster template repository where each O-Cloud Template information record is realized with a Kpt package blueprint
+ - Each FOCOM O-Cloud Template Kpt package blueprint refer one-to-one to a corresponding IMS-side O-Cloud Template in a specific O-Cloud IMS

- The Nephio based FOCOM implementation will use the Porch NBI for deployment of O-Cloud Node Clusters based on the O-Cloud Template. To trigger creation of a new Node Cluster, the applicable O-Cloud Template KPT package blueprint will be cloned into a new draft for a Node Cluster Provisioning Request instance package. Every O-Cloud Template KPT package blueprint will contain two manifest files, one with the O-Cloud Template information and one for a FOCOM Provisioning Request that will carry the instance specific input and be used by FOCOM to generate the O2ims Provisioning Request. Once the draft Provisioning Request instance package has been created the client shall update the FOCOM Provisioning Request manifest file inside the package to add the instance specific input. Finally the client will propose and approve the Provisioning Request instance package and this will trigger FOCOM to start the reconciliation process for the O2ims Provisioning Request towards the O-Cloud IMS.
+ The Nephio based FOCOM implementation will use the Porch NBI for deployment of O-Cloud Node Clusters based on the O-Cloud Template. To trigger creation of a new Node Cluster, the applicable O-Cloud Template Kpt package blueprint will be cloned into a new draft for a Node Cluster Provisioning Request instance package. Every O-Cloud Template Kpt package blueprint will contain two manifest files, one with the O-Cloud Template information and one for a FOCOM Provisioning Request that will carry the instance specific input and be used by FOCOM to generate the O2ims Provisioning Request. Once the draft Provisioning Request instance package has been created the client shall update the FOCOM Provisioning Request manifest file inside the package to add the instance specific input. Finally the client will propose and approve the Provisioning Request instance package and this will trigger FOCOM to start the reconciliation process for the O2ims Provisioning Request towards the O-Cloud IMS.

- Each O-Cloud Template KPT package blueprint will contain a reference to the id of the O-Cloud where this O-Cloud template is supported. Each O-Cloud has been previously registered in FOCOM, in the Nephio R4 implementation this is supported with a separate O-Cloud Registration CR that is manually created in the FOCOM Nephio management cluster. In coming Nephio releases this will be done as part of the O-Cloud registration user story.
+ Each O-Cloud Template Kpt package blueprint will contain a reference to the id of the O-Cloud where this O-Cloud template is supported. Each O-Cloud has been previously registered in FOCOM, in the Nephio R4 implementation this is supported with a separate O-Cloud Registration CR that is manually created in the FOCOM Nephio management cluster. In coming Nephio releases this will be done as part of the O-Cloud registration user story.

![focom1.png](/static/images/network-architecture/o-ran/focom1.png)
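
A rough sketch of that lifecycle driven through porchctl, with package and repository names purely illustrative (the exact subcommands and flags may differ by release):

```bash
# Clone the O-Cloud Template blueprint into a draft Provisioning Request instance package
porchctl rpkg clone <template-package-revision> my-cluster-request --repository mgmt -n default
# Edit the FOCOM Provisioning Request manifest in the draft, then propose and approve it
porchctl rpkg propose <new-package-revision> -n default
porchctl rpkg approve <new-package-revision> -n default
```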

content/en/docs/porch/contributors-guide/_index.md (4 additions, 4 deletions)

@@ -25,7 +25,7 @@ Porch comprises of several software components:
handlers, Porch `main` function
* [engine](https://github.com/nephio-project/porch/tree/main/pkg/engine): Core logic of Package Orchestration -
operations on package contents
- * [func](https://github.com/nephio-project/porch/tree/main/func): KRM function evaluator microservice; exposes gRPC API
+ * [func](https://github.com/nephio-project/porch/tree/main/func): KRM function evaluator microservice; exposes GRPC API
* [repository](https://github.com/nephio-project/porch/blob/main/pkg/repository): Repository integration package
* [git](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/git): Integration with Git repository.
* [oci](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/oci): Integration with OCI repository.
@@ -69,7 +69,7 @@ Follow the [Running Porch Locally](../running-porch/running-locally.md) guide to
## Debugging

To debug Porch, run Porch locally [running-locally.md](../running-porch/running-locally.md), exit porch server running
- in the shell, and launch Porch under the debugger. VSCode debug session is pre-configured in
+ in the shell, and launch Porch under the debugger. VS Code debug session is pre-configured in
[launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json).

Update the launch arguments to your needs.
@@ -91,7 +91,7 @@ Some useful code pointers:
All tests can be run using `make test`. Individual tests can be run using `go test`.
End-to-End tests assume that Porch instance is running and `KUBECONFIG` is configured
with the instance. The tests will automatically detect whether they are running against
- Porch running on local machien or k8s cluster and will start Git server appropriately,
+ Porch running on local machine or k8s cluster and will start Git server appropriately,
then run test suite against the Porch instance.
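
For example, a single package or test case can be targeted (the package path and test name below are illustrative):

```bash
go test ./pkg/engine/... -run TestSomeEngineCase -v
```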

## Makefile Targets
@@ -112,5 +112,5 @@ then run test suite against the Porch instance.
## VS Code

[VS Code](https://code.visualstudio.com/) works really well for editing and debugging.
- Just open VSCode from the root folder of the Porch repository and it will work fine. The folder contains the needed
+ Just open VS Code from the root folder of the Porch repository and it will work fine. The folder contains the needed
configuration to Launch different functions of Porch.

content/en/docs/porch/contributors-guide/dev-process.md (2 additions, 2 deletions)

@@ -174,8 +174,8 @@ E2E=1 go test -v ./test/e2e/cli -run TestPorch/rpkg-lifecycle

```

- To run the end to end tests on your local machine towards a Porch server running in VSCode, be aware of the following if the tests are not running:
- - Set the actual load balancer IP address for the function runner in your "launch.json", for example
+ To run the end to end tests on your local machine towards a Porch server running in VS Code, be aware of the following if the tests are not running:
+ - Set the actual load balancer IP address for the function runner in your *launch.json*, for example
"--function-runner=172.18.255.201:9445"
- Clear the git cache of your Porch workspace before every test run, for example
`rm -fr <workspace_root_dir>/.cache/git/*`

@@ -120,15 +120,15 @@ You have now set up the VM so that it can be used for remove debugging of Porch.

Use the [VS Code Remote SSH]
(https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
- plugin to To debug from VS Code running on your local machine towards a VM. Detailed documentation
+ plugin to debug from VS Code running on your local machine towards a VM. Detailed documentation
on the plugin and its use is available on the
[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) in the VS Code
documentation.

1. Use the **Connect to a remote host** instructions on the
[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page to connect to your VM.

- 2. Click **Open Folder** and browse to the Porch code on the vm, */home/ubuntu/git/github/nephio-project/porch* in this
+ 2. Click **Open Folder** and browse to the Porch code on the VM, */home/ubuntu/git/github/nephio-project/porch* in this
case:

![Browse to Porch code](/static/images/porch/contributor/01_VSCodeOpenPorchFolder.png)

@@ -90,7 +90,7 @@ See more advanced variants of this command in the [detailed description of the d
At this point you are basically ready to start developing porch, but before you start it is worth checking that
everything works as expected.

- ### Check that the apiservice is ready
+ ### Check that the APIservice is ready

```bash
kubectl get apiservice v1alpha1.porch.kpt.dev
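
# Illustrative extra check, not part of this change: porch-system is an
# assumed default namespace for the Porch deployment
kubectl get pods -n porch-system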

content/en/docs/porch/function-runner-pod-templates.md (1 addition, 1 deletion)

@@ -26,7 +26,7 @@ The following contract needs to be fulfilled by any function evaluator pod templ
## Enabling pod templating on function runner

A ConfigMap with the pod template should be created in the namespace where the porch-fn-runner pod is running.
- The configMap's name should be included as `--function-pod-template` in the command line arguments in the pod spec of the function runner.
+ The ConfigMap's name should be included as `--function-pod-template` in the command line arguments in the pod spec of the function runner.

```yaml
...