fix(docs): fix multiple typos in documentation (openebs#2806)
Signed-off-by: Arpit Pandey <[email protected]>
Arpit Pandey authored and kmova committed Oct 27, 2019
1 parent dcc1c57 commit 219e972
Showing 13 changed files with 15 additions and 15 deletions.
6 changes: 3 additions & 3 deletions ADOPTERS.md
@@ -8,11 +8,11 @@ The list of organizations that have publicly shared the usage of OpenEBS:

| Organization | Stateful Workloads | Success Story |
| :--- | :--- | :--- |
- | [Arista Networks](https://www.arista.com/en/) | Gerrit (multiple flavors), NPM, Maven, Redis, NFS, Sonarqube, Internal tools | [English](./adopters/arista/README.md) |
+ | [Arista Networks](https://www.arista.com/en/) | Gerrit (multiple flavors), NPM, Maven, Redis, NFS, SonarQube, Internal tools | [English](./adopters/arista/README.md) |
| [CLEW Medical](https://clewmed.com/) | PostgreSQL, Keycloak, RabbitMQ | [English](./adopters/clewmedical/README.md) |
| [Clouds Sky GmbH](https://cloudssky.com/en/) | Confluent Kafka, Strimzi Kafka, Elasticsearch, Prometheus | [English](./adopters/cloudssky/README.md) |
- | [Code Wave](https://codewave.eu/) | Bitwarden, Bookstack, Allegros Ralph, Limesurvey, Grafana, Hackmd/Codimd, Minio, Nextcloud, Percona XtraDB Cluster Operator, Nextcloud, Sonarqube, Sentry, Jupyterhub | [English](./adopters/codewave/README.md) |
- | [Comcast](https://github.com/Comcast) | Prometheus, Alertmanager, Influxdb, Helm Chartmuseum | [English](./adopters/comcast/README.md) |
+ | [Code Wave](https://codewave.eu/) | Bitwarden, BookStack, Allegros Ralph, LimeSurvey, Grafana, HackMD/CodiMD, MinIO, Nextcloud, Percona XtraDB Cluster Operator, SonarQube, Sentry, JupyterHub | [English](./adopters/codewave/README.md) |
+ | [Comcast](https://github.com/Comcast) | Prometheus, Alertmanager, InfluxDB, Helm ChartMuseum | [English](./adopters/comcast/README.md) |
| [Plaid Cloud](https://github.com/PlaidCloud) | Redis, Prometheus, Elasticsearch, PostgreSQL | [English](./adopters/plaidcloud/README.md) |


2 changes: 1 addition & 1 deletion adopters/plaidcloud/README.md
@@ -11,7 +11,7 @@
### Type of OpenEBS Storage Engines behind the above applications

- **cStor** (for "monitoring" apps like Prometheus and Elasticsearch)
- - **Local PV** (for "customer-facing" apps like Redis, Postgresql, and our own)
+ - **Local PV** (for "customer-facing" apps like Redis, PostgreSQL, and our own)

Initially we used cStor for all of our apps (separated into "fast" and "slow" storage pools), but recently moved our performance-sensitive workloads to use Local PVs.

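For readers reproducing a similar split, the two engines surface in the cluster as separate StorageClasses, so a quick way to see what a given install offers is the check below (the class names in the comment are the common OpenEBS defaults, not something this adopter story specifies):

```
kubectl get storageclass
# a stock OpenEBS install typically lists openebs-hostpath / openebs-device
# (Local PV) alongside any cStor pool-backed classes defined for the cluster;
# a workload opts into one engine by setting storageClassName in its PVC
```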
2 changes: 1 addition & 1 deletion contribute/design/release-management.md
@@ -5,7 +5,7 @@ OpenEBS Components are released as container images with versioned tags. The rel
The release process involves the following stages:
- Release Candidate Builds
- Update Installer and Documentation
- - Update the charts like Helm stable and other partner charts. (Rancher, OpenShift, IBM ICP Communty Charts, Netapp NKS Trusted Charts (formerly StackPointCloud), AWS Marketplace)
+ - Update the charts like Helm stable and other partner charts. (Rancher, OpenShift, IBM ICP Community Charts, Netapp NKS Trusted Charts (formerly StackPointCloud), AWS Marketplace)
- Update the openebs-operator.yaml
- Final Release

2 changes: 1 addition & 1 deletion k8s/demo/cassandra/README.md
@@ -370,7 +370,7 @@ Node:
Schema:
Keyspace: keyspace1
Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
- Replication Strategy Pptions: {replication_factor=1}
+ Replication Strategy Options: {replication_factor=1}
Table Compression: null
Table Compaction Strategy: null
Table Compaction Strategy Options: {}
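For context on the values shown above, a keyspace with that replication setup corresponds to CQL along these lines (keyspace name and factor are taken from the output; the host is a placeholder, and the exact statement the benchmark tool issues may differ):

```
cqlsh <cassandra-host> -e "CREATE KEYSPACE IF NOT EXISTS keyspace1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
```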
2 changes: 1 addition & 1 deletion k8s/demo/crunchy-postgres/README.md
@@ -136,7 +136,7 @@ The verification procedure can be carried out using the following steps:

### Step-1: Install the PostgreSQL-Client

- Install the PostgreSQL CLient Utility (psql) on any of the Kubernetes machines to perform database operations
+ Install the PostgreSQL Client Utility (psql) on any of the Kubernetes machines to perform database operations
from the command line.

```
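# Illustrative only: the install command itself is truncated in this hunk.
# Once psql is available, a basic connectivity check against the cluster
# service looks roughly like this (host, user and database are placeholders,
# not values taken from this guide):
psql -h <postgres-service-ip> -p 5432 -U postgres -d postgres -c 'SELECT version();'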
2 changes: 1 addition & 1 deletion k8s/demo/galera-xtradb-cluster/deployments/README.md
@@ -90,7 +90,7 @@ https://www.percona.com/blog/2015/06/23/percona-xtradb-cluster-pxc-how-many-node
secondary/other nodes. Deploying all YAMLs together can cause the pods to restar repeatedly. Th reason stated in Kubernetes
documentation is:

- *If there is a node in wsrep_clsuter_address without a backing galera node there will be nothing to obtain SST from which
+ *If there is a node in wsrep_cluster_address without a backing galera node there will be nothing to obtain SST from which
will cause the node to shut itself down and the container in question to exit and relaunch.*


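The practical consequence of that note is to bring the cluster up one node at a time rather than applying every manifest at once; a rollout in that spirit might look like the sketch below (file and deployment names are placeholders, not taken from this repository):

```
kubectl apply -f pxc-node1.yaml              # bootstrap the first Galera node
kubectl rollout status deployment/pxc-node1  # wait until it is Ready, so an SST donor exists
kubectl apply -f pxc-node2.yaml              # then join the remaining nodes one by one
kubectl apply -f pxc-node3.yaml
```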
2 changes: 1 addition & 1 deletion k8s/demo/percona/manage-mysql-with-volume-snapshots.md
@@ -145,7 +145,7 @@ OpenEBS pod names are derived from the volume name, with the string before the "

## Step-5: Delete the Percona pod to force reschedule and remount

- - The changes caused by the snapshot restore operation on the databsae can be viewed only when the data volume is remounted.
+ - The changes caused by the snapshot restore operation on the database can be viewed only when the data volume is remounted.
This can be achieved if the pod is rescheduled. To force a reschedule, delete the pod (since Percona application has been
launched as a Kubernetes deployment, the pod will be rescheduled/recreated on either the same OR on other nodes if available).

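As a concrete illustration of that step, and assuming the Percona deployment's pods carry a `name=percona` label (an assumption; use whatever selector your spec defines), the forced reschedule boils down to:

```
kubectl delete pod -l name=percona   # the Deployment controller recreates the pod
kubectl get pods -w                  # watch the replacement reach Running with the restored volume
```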
2 changes: 1 addition & 1 deletion k8s/jiva/README.md
@@ -57,7 +57,7 @@ Download the files for deleting Jiva snapshots from Jiva repository using the fo
wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/jiva/patch.json
wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/jiva/snapshot-cleanup.sh
```
- Ensure that `snapshot-cleanup.sh` has execute permission. If not, make it execuatble by running `chmod +x snapshot-cleanup.sh` from the downloaded folder.
+ Ensure that `snapshot-cleanup.sh` has execute permission. If not, make it executable by running `chmod +x snapshot-cleanup.sh` from the downloaded folder.

Now get the PV name using the following command.
```
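# The command itself is truncated in this hunk. A common way to find the PV
# bound to a claim (illustrative; not necessarily the command the guide uses):
kubectl get pvc <pvc-name> -o jsonpath='{.spec.volumeName}'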
2 changes: 1 addition & 1 deletion k8s/lib/vagrant/README.md
@@ -18,7 +18,7 @@ The configuration scripts under the boxes will typically perform the following:
- Download the post-vm boot configuration scripts
- Download the demo yaml spec files.

- OpenEBS repository also hosts (in different directory), the scripts required for post-vm initialization tasks like calling "kubeadm join" with required parameters. Simillarly, the sample k8s pod specs are also provided. These scripts and specs are pre-packaged into spec files into setup and demo directories respectively.
+ OpenEBS repository also hosts (in different directory), the scripts required for post-vm initialization tasks like calling "kubeadm join" with required parameters. Similarly, the sample k8s pod specs are also provided. These scripts and specs are pre-packaged into spec files into setup and demo directories respectively.

- configuration scripts are at [k8s/lib/scripts](https://github.com/openebs/openebs/tree/master/k8s/lib/scripts)
- demo yaml files are at [k8s/demo/specs](https://github.com/openebs/openebs/tree/master/k8s/demo/specs)
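For reference, the "kubeadm join" step those post-VM scripts perform takes the general form below (IP, token and hash are placeholders generated on the master; exact flags depend on the Kubernetes version the boxes install):

```
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```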
2 changes: 1 addition & 1 deletion k8s/openshift/byo/baremetal/README.md
@@ -18,7 +18,7 @@ run applications on it with OpenEBS storage.
- Ansible (>= 2.3) on the local machine or any one of the hosts (typically installed on the host used as openshift-master).
- pyYaml Python package on all the hosts.

- - Fucntional DNS server, with all hosts configured by appropriate domain names (Ensure *nslookup* of the hostnames is
+ - Functional DNS server, with all hosts configured by appropriate domain names (Ensure *nslookup* of the hostnames is
successful in resolving the machine IP addresses).

- Setup passwordless SSH between the Ansible host & other hosts.
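Quick sanity checks for the DNS and SSH prerequisites above might look like the following (hostnames are placeholders):

```
nslookup openshift-master.example.com        # should resolve to the machine's IP
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # once, on the Ansible host (skip if a key already exists)
ssh-copy-id root@openshift-node1.example.com # repeat for every host in the inventory
```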
@@ -337,7 +337,7 @@ oc apply -f openebs-operator
oc apply -f openebs-storageclasses.yaml
```

- -After applying the operator yaml, if you see pod status is in pending state and on describing the maya-apiserver pod the the following error message is found.
+ -After applying the operator yaml, if you see pod status is in pending state and on describing the maya-apiserver pod the following error message is found.

```
[root@osnode1 ~]# oc get pods -n openebs
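# Illustrative follow-up to the check above: describing the pending pod surfaces
# the scheduling/error events the text refers to (pod name is a placeholder taken
# from the `oc get pods -n openebs` output):
oc describe pod <maya-apiserver-pod-name> -n openebs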
2 changes: 1 addition & 1 deletion k8s/openshift/examples/README.md
@@ -7,7 +7,7 @@ using "Import YAML / JSON" button from the web console.

Templates may require specific versions of OpenShift so they've been namespaced.
At this time, once a new version of Origin is released
- the older versions will only receive new content by speficic request.
+ the older versions will only receive new content by specific request.

Please file an issue at https://github.com/openebs/openebs you'd
like to see older content updated and have tested to ensure it's backwards
@@ -89,7 +89,7 @@ spec:
- upgrade-cstor-pool-082-090-patch-deployment-version-post-check

# Runtask #14
- # This runtask list all the replicaset of the the cstorpool deployment.
+ # This runtask list all the replicaset of the cstorpool deployment.
- upgrade-cstor-pool-082-090-list-replicaset

# Runtask #15
