Commit 02a3333

Replication Sets and Daemon sets added

Signed-off-by: knrt10 <[email protected]>
1 parent 677dd37 commit 02a3333

File tree

5 files changed
+202 -1 lines changed
File renamed without changes.

kubia-replicaset.yaml

+19
```yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: knrt10/kubia
        ports:
        - containerPort: 8080
```

readme.md

+162-1
@@ -90,6 +90,13 @@ This is just a simple demonstration to get a basic understanding of how kubernet
 - [Changing the pod template](#changing-the-pod-template)
 - [Horizontally scaling pods](#Horizontally-scaling-pods)
 - [Deleting a ReplicationController](#deleting-a-replicationController)
+- [Using ReplicaSets instead of ReplicationControllers](#using-replicasets-instead-of-replicationcontrollers)
+- [Defining a ReplicaSet](#defining-a-replicaset)
+- [Using the ReplicaSet's more expressive label selectors](#using-the-replicasets-more-expressive-label-selectors)
+- [Running exactly one pod on each node with DaemonSets](#running-exactly-one-pod-on-each-node-with-daemonsets)
+- [Using a DaemonSet to run a pod on every node](#using-a-daemonset-to-run-a-pod-on-every-node)
+- [Explaining DaemonSets with an example](#explaining-daemonsets-with-an-example)
+- [Creating the DaemonSet](#creating-the-daemonset)

 4. [Todo](#todo)

@@ -791,7 +798,7 @@ Namespaces enable you to separate resources that don’t belong together into no
 A namespace is a Kubernetes resource like any other, so you can create it by posting a
 YAML file to the Kubernetes API server. Let’s see how to do this now.

-You’re going to create a file called **custom-namespace.yml** (you can create it in any directory you want), or copy from this repo, where you’ll find the file with filename [custom-namespace.yml](https://github.com/knrt10/kubernetes-basicLearning/blob/master/custom-namespace.yml). The following listing shows the entire contents of the file.
+You’re going to create a file called **custom-namespace.yaml** (you can create it in any directory you want), or copy it from this repo, where you’ll find it as [custom-namespace.yaml](https://github.com/knrt10/kubernetes-basicLearning/blob/master/custom-namespace.yaml). The following listing shows the entire contents of the file.
@@ -1268,6 +1275,160 @@ When deleting a ReplicationController with kubectl delete, you can keep its pods

You’ve deleted the ReplicationController so the pods are on their own. They are no longer managed. But you can always create a new ReplicationController with the proper label selector and make them managed again.
### Using ReplicaSets instead of ReplicationControllers

Initially, ReplicationControllers were the only Kubernetes component for replicating pods and rescheduling them when nodes failed. Later, a similar resource called a ReplicaSet was introduced. It’s a new generation of ReplicationController and replaces it completely (ReplicationControllers will eventually be deprecated).

We could have started this section by creating a ReplicaSet instead of a ReplicationController, but I felt it would be a good idea to start with what was initially available in Kubernetes **(Please Don't report me :wink:)**. Plus, you’ll still see ReplicationControllers used in the wild, so it’s good for you to know about them. That said, you should always create ReplicaSets instead of ReplicationControllers from now on. They’re almost identical, so you shouldn’t have any trouble using them instead.

A ReplicaSet behaves exactly like a ReplicationController, but it has more expressive pod selectors. Whereas a ReplicationController’s label selector only allows matching pods that include a certain label, a ReplicaSet’s selector also allows matching pods that lack a certain label or pods that include a certain label key, regardless of its value.

For example, a single ReplicationController can’t match pods with the label `env=production` and those with the label `env=devel` at the same time. It can only match either the `env=production` pods or the `env=devel` pods. A single ReplicaSet, however, can match both sets of pods and treat them as a single group.

Similarly, a ReplicationController can’t match pods based merely on the presence of a label key, regardless of its value, whereas a ReplicaSet can. For example, a ReplicaSet can match all pods that include a label with the key `env`, whatever its actual value is (you can think of it as `env=*`).
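For illustration (this snippet isn’t part of the commit’s files), a ReplicaSet selector that matches both groups of pods from the example above could look like this:

```yml
selector:
  matchExpressions:
  - key: env
    operator: In
    values:
    - production
    - devel
```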
#### Defining a ReplicaSet

You’re going to create a ReplicaSet now to see how the orphaned pods that were created by your ReplicationController and then abandoned earlier can be adopted by a ReplicaSet.

You’re going to create a file called **kubia-replicaset.yaml** (you can create it in any directory you want), or copy it from this repo, where you’ll find it as [kubia-replicaset.yaml](https://github.com/knrt10/kubernetes-basicLearning/blob/master/kubia-replicaset.yaml). The following listing shows the entire contents of the file.
```yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: knrt10/kubia
        ports:
        - containerPort: 8080
```
The first thing to note is that ReplicaSets aren’t part of the v1 API, so you need to ensure you specify the proper apiVersion (`apps/v1`) when creating the resource. You’re creating a resource of type ReplicaSet, which has much the same contents as the ReplicationController you created earlier.

The only difference is in the selector. Instead of listing the labels the pods need to have directly under the `selector` property, you’re specifying them under `selector.matchLabels`. This is the simpler (and less expressive) way of defining label selectors in a ReplicaSet. Because you still have three pods matching the `app=kubia` selector running from earlier, creating this ReplicaSet will not cause any new pods to be created. The ReplicaSet will take those existing three pods under its wing. You can create the ReplicaSet using the `kubectl create` command and then examine it using `kubectl describe`:
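```bash
# Create the ReplicaSet from the kubia-replicaset.yaml file above
kubectl create -f kubia-replicaset.yaml

# Examine it; it should report three replicas matching the app=kubia selector
kubectl describe rs kubia
```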
As you can see from the describe output, the ReplicaSet isn’t any different from a ReplicationController. It shows it has three replicas matching the selector. If you list all the pods, you’ll see they’re still the same three pods you had before. The ReplicaSet didn’t create any new ones.
#### Using the ReplicaSet's more expressive label selectors

The main improvement of ReplicaSets over ReplicationControllers is their more expressive label selectors. You intentionally used the simpler `matchLabels` selector in the first ReplicaSet example to see that ReplicaSets are no different from ReplicationControllers.

You can also add additional expressions to the selector through the `matchExpressions` property (see the example after the following list). Each expression must contain a key, an operator, and possibly (depending on the operator) a list of values. There are four valid operators:
- `In`: Label’s value must match one of the specified values.
- `NotIn`: Label’s value must not match any of the specified values.
- `Exists`: Pod must include a label with the specified key (the value isn’t important). When using this operator, you shouldn’t specify the `values` field.
- `DoesNotExist`: Pod must not include a label with the specified key. The `values` property must not be specified.
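Here is the example referred to above: a sketch (not one of the files in this commit) of how the kubia ReplicaSet’s selector from the listing could equivalently be written with `matchExpressions`:

```yml
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - kubia
```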
If you specify multiple expressions, all of them must evaluate to true for the selector to match a pod. If you specify both `matchLabels` and `matchExpressions`, all the labels must match and all the expressions must evaluate to true for the pod to match the selector.

This was a quick introduction to ReplicaSets as an alternative to ReplicationControllers. Remember, always use them instead of ReplicationControllers, but you may still find ReplicationControllers in other people’s deployments.

Now, delete the ReplicaSet to clean up your cluster a little. You can delete the ReplicaSet the same way you’d delete a ReplicationController:

`kubectl delete rs kubia`

> replicaset "kubia" deleted

Deleting the ReplicaSet should delete all the pods. List the pods to confirm that’s the case:
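```bash
# The three kubia pods should show up as Terminating and then disappear
kubectl get po
```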
### Running exactly one pod on each node with DaemonSets

Both ReplicationControllers and ReplicaSets are used for running a specific number of pods deployed anywhere in the Kubernetes cluster. But certain cases exist when you want a pod to run on each and every node in the cluster (and each node needs to run exactly one instance of the pod). Those cases include infrastructure-related pods that perform system-level operations.

For example, you’ll want to run a log collector and a resource monitor on every node. Another good example is Kubernetes’ own kube-proxy process, which needs to run on all nodes to make services work.

![Daemon Sets](https://user-images.githubusercontent.com/24803604/71682868-9183e900-2d56-11ea-9cab-e645ef0564b1.png)

> DaemonSets run only a single pod replica on each node, whereas ReplicaSets scatter them around the whole cluster randomly.
#### Using a DaemonSet to run a pod on every node

To run a pod on all cluster nodes, you create a DaemonSet object, which is much like a ReplicationController or a ReplicaSet, except that pods created by a DaemonSet already have a target node specified and skip the Kubernetes Scheduler. They aren’t scattered around the cluster randomly.

A DaemonSet makes sure it creates as many pods as there are nodes and deploys each one on its own node, as shown above. Whereas a ReplicaSet (or ReplicationController) makes sure that a desired number of pod replicas exist in the cluster, a DaemonSet doesn’t have any notion of a desired replica count. It doesn’t need one, because its job is to ensure that a pod matching its pod selector is running on each node.

If a node goes down, the DaemonSet doesn’t cause the pod to be created elsewhere. But when a new node is added to the cluster, the DaemonSet immediately deploys a new pod instance to it. It also does the same if someone inadvertently deletes one of the pods, leaving the node without the DaemonSet’s pod. Like a ReplicaSet, a DaemonSet creates the pod from the pod template configured in it.
#### Explaining DaemonSets with an example

Let’s imagine having a daemon called `ssd-monitor` that needs to run on all nodes that contain a solid-state drive (SSD). You’ll create a DaemonSet that runs this daemon on all nodes that are marked as having an SSD. The cluster administrators have added the `disk=ssd` label to all such nodes, so you’ll create the DaemonSet with a node selector that only selects nodes with that label.

![creating daemon sets](https://user-images.githubusercontent.com/24803604/71683593-9c3f7d80-2d58-11ea-94a0-ffc9c3cd4071.png)

You’ll create a DaemonSet that runs a mock ssd-monitor process, which prints "SSD OK" to the standard output every five seconds. I’ve already prepared the mock container image and pushed it to Docker Hub as `knrt10/ssd-monitor`, so you can use it instead of building your own.
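If you do want to build the image yourself, the ssd-monitor/Dockerfile added in this commit is all there is to it. A minimal sketch, assuming you have Docker installed and your own Docker Hub account (`<your-account>` is a placeholder, not something from this repo):

```bash
# Build the mock monitor image from the Dockerfile in the ssd-monitor/ directory
docker build -t <your-account>/ssd-monitor ssd-monitor/

# Push it so your cluster nodes can pull it
docker push <your-account>/ssd-monitor

# If you use your own image, remember to change the image field in the
# DaemonSet manifest below to <your-account>/ssd-monitor as well
```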
You’re going to create a file called **ssd-monitor-daemonset.yaml** (you can create it in any directory you want), or copy it from this repo, where you’ll find it as [ssd-monitor-daemonset.yaml](https://github.com/knrt10/kubernetes-basicLearning/blob/master/ssd-monitor-daemonset.yaml). The following listing shows the entire contents of the file.
```yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: knrt10/ssd-monitor
```
You’re defining a DaemonSet that will run a pod with a single container based on the `knrt10/ssd-monitor` container image. An instance of this pod will be created for each node that has the `disk=ssd` label.
#### Creating the DaemonSet

Create the DaemonSet with the `kubectl create` command, as you already know how to do:

`kubectl create -f ssd-monitor-daemonset.yaml`

Let’s see the created DaemonSet:

`kubectl get ds`

```bash
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ssd-monitor   0         0         0         0            0           disk=ssd        18s
```
Those zeroes look strange. Didn’t the DaemonSet deploy any pods? List the pods:

`kubectl get po`

> No resources found.

Where are the pods? Do you know what’s going on? Yes, you forgot to label your nodes with the `disk=ssd` label. No problem, you can do that now. The DaemonSet should detect that the nodes’ labels have changed and deploy the pod to all nodes with a matching label. Let’s see if that’s true:

`kubectl label node minikube disk=ssd`

The DaemonSet should have created one pod now. List the pods again:

`kubectl get po`

```bash
NAME                READY     STATUS    RESTARTS   AGE
ssd-monitor-zs6sr   1/1       Running   0          6s
```
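If you check the DaemonSet itself again, its counts should now reflect the one labeled node:

```bash
# DESIRED, CURRENT and READY should all report 1 now
kubectl get ds ssd-monitor
```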
Okay, so far so good. If you have multiple nodes and you add the same label to further nodes, you’ll see the DaemonSet spin up a pod on each of them. Now, imagine you’ve made a mistake and have mislabeled one of the nodes. It has a spinning disk drive, not an SSD. What happens if you change the node’s label?
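On the single-node Minikube cluster used above, one way to try this (a sketch; `disk=hdd` is just an arbitrary replacement value):

```bash
# Replace the existing disk=ssd label with disk=hdd
kubectl label node minikube disk=hdd --overwrite
```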
The pod is being terminated. But you knew that was going to happen, right? This wraps up your exploration of DaemonSets, so you may want to delete your ssd-monitor DaemonSet. If you still have any other daemon pods running, you’ll see that deleting the DaemonSet deletes those pods as well:
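```bash
# Deleting the DaemonSet also deletes the pods it manages
kubectl delete ds ssd-monitor
```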
## Todo

- [ ] Write more about pods

ssd-monitor-daemonset.yaml

+18
```yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: knrt10/ssd-monitor
```

ssd-monitor/Dockerfile

+3
```dockerfile
FROM busybox

ENTRYPOINT while true; do echo 'SSD OK'; sleep 5; done
```
