
Commit 33f1862

Run gendocs

1 parent aacc4c8

210 files changed, +599 -27 lines changed
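Every hunk in this commit applies the same normalization: a blank line is inserted after a heading or a munge comment so the generated docs render consistently. As a rough illustration of the pattern only, not the project's actual gendocs tooling, a one-pass awk filter could do the heading half of it:

```sh
# Sketch only: emit a blank line after each markdown heading unless one is
# already present. The real gendocs run also normalizes the line following
# <!-- END MUNGE: ... --> comments, as the hunks below show.
awk '{
  if (prev_was_heading && $0 != "") print ""
  print
  prev_was_heading = ($0 ~ /^#/)
}' docs/README.md
```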

docs/README.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Kubernetes Documentation: releases.k8s.io/HEAD
 
 * The [User's guide](user-guide/README.md) is for anyone who wants to run programs and

docs/admin/README.md (+2)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Kubernetes Cluster Admin Guide
 
 The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
@@ -72,6 +73,7 @@ If you are modifying an existing guide which uses Salt, this document explains [
 project.](salt.md).
 
 ## Upgrading a cluster
+
 [Upgrading a cluster](cluster-management.md).
 
 ## Managing nodes

docs/admin/accessing-the-api.md (+3)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Configuring APIserver ports
 
 This document describes what ports the kubernetes apiserver
@@ -42,6 +43,7 @@ in [Accessing the cluster](../user-guide/accessing-the-cluster.md).
 
 
 ## Ports and IPs Served On
+
 The Kubernetes API is served by the Kubernetes APIServer process. Typically,
 there is one of these running on a single kubernetes-master node.
 
@@ -93,6 +95,7 @@ variety of uses cases:
 setup time. Kubelets use cert-based auth, while kube-proxy uses token-based auth.
 
 ## Expected changes
+
 - Policy will limit the actions kubelets can do via the authed port.
 - Scheduler and Controller-manager will use the Secure Port too. They
   will then be able to run on different machines than the apiserver.
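The hunks above discuss the apiserver's ports only at the prose level; to see what an apiserver is actually listening on, one can inspect its sockets on the master. A minimal sketch, assuming the process is named kube-apiserver and ss is installed:

```sh
# Show TCP listening sockets for the apiserver process (root is needed to
# map sockets to process names; the process name is an assumption).
sudo ss -tlnp | grep kube-apiserver
```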

docs/admin/admission-controllers.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Admission Controllers
 
 **Table of Contents**

docs/admin/authentication.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Authentication Plugins
 
 Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.

docs/admin/authorization.md (+3)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Authorization Plugins
 
 
@@ -53,6 +54,7 @@ The following implementations are available, and are selected by flag:
 `ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control.
 
 ## ABAC Mode
+
 ### Request Attributes
 
 A request has 4 attributes that can be considered for authorization:
@@ -105,6 +107,7 @@ To permit any user to do something, write a policy with the user property unset.
 To permit an action Policy with an unset namespace applies regardless of namespace.
 
 ### Examples
+
 1. Alice can do anything: `{"user":"alice"}`
 2. Kubelet can read any pods: `{"user":"kubelet", "resource": "pods", "readonly": true}`
 3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
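The example policies in the last hunk are one-line JSON objects, which is the on-disk shape an ABAC policy file takes: one policy per line. A minimal sketch of turning those exact examples into a file; the path is hypothetical and the apiserver wiring in the comment is an assumption, not shown in this commit:

```sh
# Hypothetical policy file; the three policies are copied from the hunk above.
cat > /srv/kubernetes/abac-policy.jsonl <<'EOF'
{"user":"alice"}
{"user":"kubelet", "resource": "pods", "readonly": true}
{"user":"kubelet", "resource": "events"}
EOF
# The file would then be handed to the apiserver via its authorization flags
# (an ABAC mode plus a policy-file flag; exact flag names vary by release).
```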

docs/admin/cluster-components.md (+2)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Kubernetes Cluster Admin Guide: Cluster Components
 
 This document outlines the various binary components that need to run to
@@ -92,6 +93,7 @@ These controllers include:
 selects a node for them to run on.
 
 ### addons
+
 Addons are pods and services that implement cluster features. They don't run on
 the master VM, but currently the default setup scripts that make the API calls
 to create these pods and services does run on the master VM. See:

docs/admin/cluster-large.md (+3)

@@ -30,9 +30,11 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Kubernetes Large Cluster
 
 ## Support
+
 At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 container per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).
 
 ## Setup
@@ -59,6 +61,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
 * Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
 
 ### Addon Resources
+
 To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).
 
 For example:

docs/admin/cluster-management.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Cluster Management
 
 This doc is in progress.

docs/admin/cluster-troubleshooting.md (+6)

@@ -30,13 +30,16 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Cluster Troubleshooting
+
 This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
 problem you are experiencing. See
 the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
 You may also visit [troubleshooting document](../troubleshooting.md) for more information.
 
 ## Listing your cluster
+
 The first thing to debug in your cluster is if your nodes are all registered correctly.
 
 Run
@@ -48,15 +51,18 @@ kubectl get nodes
 And verify that all of the nodes you expect to see are present and that they are all in the ```Ready``` state.
 
 ## Looking at logs
+
 For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
 of the relevant log files. (note that on systemd-based systems, you may need to use ```journalctl``` instead)
 
 ### Master
+
 * /var/log/kube-apiserver.log - API Server, responsible for serving the API
 * /var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
 * /var/log/kube-controller-manager.log - Controller that manages replication controllers
 
 ### Worker Nodes
+
 * /var/log/kubelet.log - Kubelet, responsible for running containers on the node
 * /var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing
 
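Taken together, the commands and log paths named in these hunks make a short first-pass checklist. A sketch built directly from the doc's own file list; the journalctl unit name is an assumption for systemd hosts:

```sh
# 1. Are all expected nodes registered and in the Ready state?
kubectl get nodes

# 2. On the master, skim the component logs the doc lists.
tail -n 50 /var/log/kube-apiserver.log \
           /var/log/kube-scheduler.log \
           /var/log/kube-controller-manager.log

# 3. On systemd-based systems, read the journal instead (unit name assumed).
journalctl -u kubelet
```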
docs/admin/dns.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # DNS Integration with Kubernetes
 
 As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).

docs/admin/high-availability.md (+17 -1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # High Availability Kubernetes Clusters
 
 **Table of Contents**
@@ -43,6 +44,7 @@ Documentation for other releases can be found at
 <!-- END MUNGE: GENERATED_TOC -->
 
 ## Introduction
+
 This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
 Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
 the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.md),
@@ -52,6 +54,7 @@ Also, at this time high availability support for Kubernetes is not continuously
 be working to add this continuous testing, but for now the single-node master installations are more heavily tested.
 
 ## Overview
+
 Setting up a truly reliable, highly available distributed system requires a number of steps, it is akin to
 wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
 of these steps in detail, but a summary is given here to help guide and orient the user.
@@ -68,6 +71,7 @@ Here's what the system should look like when it's finished:
 Ready? Let's get started.
 
 ## Initial set-up
+
 The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux.
 Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions.
 Likewise, this set up should work whether you are running in a public or private cloud provider, or if you are running
@@ -78,6 +82,7 @@ instructions at [https://get.k8s.io](https://get.k8s.io)
 describe easy installation for single-master clusters on a variety of platforms.
 
 ## Reliable nodes
+
 On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
 to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
 the ```kubelet``` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
@@ -98,6 +103,7 @@ On systemd systems you ```systemctl enable kubelet``` and ```systemctl enable do
 
 
 ## Establishing a redundant, reliable data storage layer
+
 The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is
 to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're
 done.
@@ -109,6 +115,7 @@ size of the cluster from three to five nodes. If that is still insufficient, yo
 [even more redundancy to your storage layer](#even-more-reliable-storage).
 
 ### Clustering etcd
+
 The full details of clustering etcd are beyond the scope of this document, lots of details are given on the
 [etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through
 a simple cluster set up, using etcd's built in discovery to build our cluster.
@@ -130,6 +137,7 @@ for ```${NODE_IP}``` on each machine.
 
 
 #### Validating your cluster
+
 Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
 
 ```
@@ -146,6 +154,7 @@ You can also validate that this is working with ```etcdctl set foo bar``` on one
 on a different node.
 
 ### Even more reliable storage
+
 Of course, if you are interested in increased data reliability, there are further options which makes the place where etcd
 installs it's data even more reliable than regular disks (belts *and* suspenders, ftw!).
 
@@ -162,9 +171,11 @@ for each node. Throughout these instructions, we assume that this storage is mo
 
 
 ## Replicated API Servers
+
 Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet.
 
 ### Installing configuration files
+
 First you need to create the initial log file, so that Docker mounts a file instead of a directory:
 
 ```
@@ -183,12 +194,14 @@ Next, you need to create a ```/srv/kubernetes/``` directory on each node. This
 The easiest way to create this directory, may be to copy it from the master node of a working cluster, or you can manually generate these files yourself.
 
 ### Starting the API Server
+
 Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into ```/etc/kubernetes/manifests/``` on each master node.
 
 The kubelet monitors this directory, and will automatically create an instance of the ```kube-apiserver``` container using the pod definition specified
 in the file.
 
 ### Load balancing
+
 At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
 be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
 up a load balancer will depend on the specifics of your platform, for example instructions for the Google Cloud
@@ -203,6 +216,7 @@ For external users of the API (e.g. the ```kubectl``` command line interface, co
 them to talk to the external load balancer's IP address.
 
 ## Master elected components
+
 So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
 cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
 instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
@@ -226,6 +240,7 @@ by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kub
 directory.
 
 ### Running the podmaster
+
 Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into ```/etc/kubernetes/manifests/```
 
 As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in ```podmaster.yaml```.
@@ -236,6 +251,7 @@ the kubelet will restart them. If any of these nodes fail, the process will mov
 node.
 
 ## Conclusion
+
 At this point, you are done (yeah!) with the master components, but you still need to add worker nodes (boo!).
 
 If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and
@@ -244,7 +260,7 @@ restarting the kubelets on each node.
 If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
 set the ```--apiserver``` flag to your replicated endpoint.
 
-##Vagrant up!
+## Vagrant up!
 
 We indeed have an initial proof of concept tester for this, which is available [here](../../examples/high-availability/).
 
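The etcd validation step referenced in the hunks above (```etcdctl set foo bar``` on one node, read it back on another) can be spelled out as a quick check. A sketch assuming the v2-era etcdctl CLI that this guide's etcd links refer to:

```sh
# On node A: write a test key.
etcdctl set foo bar

# On node B: the value should replicate across the cluster.
etcdctl get foo

# Overall member health (etcd v2 CLI subcommand).
etcdctl cluster-health
```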
docs/admin/kube-apiserver.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 ## kube-apiserver
 
 

docs/admin/kube-controller-manager.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 ## kube-controller-manager
 
 

docs/admin/kube-proxy.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 ## kube-proxy
 
 

docs/admin/kube-scheduler.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 ## kube-scheduler
 
 

docs/admin/kubelet.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 ## kubelet
 
 

docs/admin/multi-cluster.md (+2)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Considerations for running multiple Kubernetes clusters
 
 You may want to set up multiple kubernetes clusters, both to
@@ -65,6 +66,7 @@ Reasons to have multiple clusters include:
 - test clusters to canary new Kubernetes releases or other cluster software.
 
 ## Selecting the right number of clusters
+
 The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
 By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to
 load and growth.

docs/admin/networking.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Networking in Kubernetes
 
 **Table of Contents**

docs/admin/node.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Node
 
 **Table of Contents**

docs/admin/ovs-networking.md (+1)

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 
 <!-- END MUNGE: UNVERSIONED_WARNING -->
+
 # Kubernetes OpenVSwitch GRE/VxLAN networking
 
 This document describes how OpenVSwitch is used to setup networking between pods across nodes.
