Day 1 - Second Half (2)
Volume
- By default, containers in the same Pod can share storage through a volume such as emptyDir (see the sketch below)
- Data inside a container/Pod is ephemeral and may be lost when the Pod is restarted or rescheduled
- Kubernetes supports many volume types
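A minimal sketch of a Pod whose two containers share an emptyDir volume (the pod, container, and file names here are hypothetical, not from the workshop files):

apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-demo         # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                    # lives as long as the Pod; deleted when the Pod goes away
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data              # both containers see the same files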
- List of supported volume types:
  - emptyDir
  - hostPath
  - gcePersistentDisk
  - awsElasticBlockStore
  - nfs
  - iscsi
  - fc (fibre channel)
  - flocker
  - glusterfs
  - rbd
  - cephfs
  - gitRepo
  - secret
  - persistentVolumeClaim (PVC) (covered on Day 2)
  - downwardAPI
  - projected
  - azureFile
  - azureDisk
  - vsphereVolume
  - quobyte
  - portworxVolume
  - scaleIO
  - storageos
  - local
  - configMap
  - csi
- hostPath
  - Mounts a file or directory from the host node's filesystem into the Pod (see the sketch below)
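A minimal sketch of a hostPath volume, modeled on the cAdvisor deployment used in Workshop 5 below (the mount path matches the workshop; the pod and container names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo               # hypothetical name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: docker-dir
      mountPath: /var/lib/docker    # path inside the container
  volumes:
  - name: docker-dir
    hostPath:
      path: /var/lib/docker         # directory on the host node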
Workshop 5: Volume
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.5_Volume/cadvisor_deploy.yml
service "cadvisor" created
deployment "cadvisor" created
Open cAdvisor at http://192.168.99.100:31000/containers/
cAdvisor shows only real-time data. To keep historical statistics, feed it into Prometheus.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cadvisor-76c9899d7f-m456k 1/1 Running 0 2m
$ kubectl exec -it cadvisor-76c9899d7f-m456k sh
/ # touch /var/lib/docker/test_cAdvisor.txt
/ # exit
$ minikube ssh
/ # ls -al /var/lib/docker
/ # exit
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.5_Volume/webtest_pod.yml
pod "webtest" created
$ kubectl exec -it webtest sh
/usr/src/app # ls -lh /temp
/usr/src/app # touch /temp/test_webtest.txt
/usr/src/app # exit
$ kubectl exec -it cadvisor-76c9899d7f-m456k sh
/ # ls -lh /var/lib/docker
/ # rm /var/lib/docker/test_webtest.txt
/ # rm /var/lib/docker/test_cAdvisor.txt
/ # exit
$ kubectl delete -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.5_Volume/cadvisor_deploy.yml
service "cadvisor" deleted
$ kubectl delete deployment/cadvisor
deployment "cadvisor" deleted
Liveness and Readiness Probe
- Normally Kubernetes checks Pod status and decides how to orchestrate the Pod according to its "restartPolicy":
  - Always (default)
  - OnFailure
  - Never
- A Pod's status depends on the "ContainerState" of the containers inside it (1-M):
  - Waiting (ContainerStateWaiting)
  - Running (ContainerStateRunning)
  - Terminated (ContainerStateTerminated)
- When the Pod receives the status of its containers, it sets the Pod status so that the kubelet knows what to do with the Pod (keep it / terminate it / restart it)
- Pod status (phase value):
  - Pending: the Pod has been accepted and container images are still being pulled
  - Running: the Pod is bound to a node and at least one container is running (others may still be starting or restarting)
  - Succeeded: all containers have terminated successfully
  - Failed: all containers have terminated and at least one of them exited with failure
  - Unknown: the Pod state cannot be determined
- Kubernetes provides three mechanisms to verify that the containers inside a Pod are still healthy:
  - ExecAction: execute a command inside the container (healthy if it returns exit code 0)
  - TCPSocketAction: open a TCP connection to a specific port of the container (healthy if the port accepts the connection)
  - HTTPGetAction: send an HTTP/HTTPS request to a specific port and path (healthy if it returns a success status such as 200 OK)
- Probe results:
  - Success
  - Failure
  - Unknown
- Difference between livenessProbe and readinessProbe:
  - livenessProbe: checks whether the container is still running properly; on failure the container is killed and restarted
  - readinessProbe: checks whether the container is ready to serve requests; on failure the Pod is removed from the Service endpoints
ExecAction
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webtest
spec:
  replicas: 1
  selector:
    matchLabels:
      name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: webtest
        image: labdocker/cluster:webservicelite_v1
        ports:
        - containerPort: 5000
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - cat
            - /usr/src/app/main.py
          initialDelaySeconds: 15
          periodSeconds: 5
        livenessProbe:
          exec:
            command:
            - cat
            - /usr/src/app/main.py
          initialDelaySeconds: 15
          periodSeconds: 15
Workshop 6: Liveness and Readiness Probe
Part 1 - ExecAction
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.6_Liveness_Readiness_Probe/webtest_deploy_liveness_readiness_exec.yml
deployment "webtest" created
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.6_Liveness_Readiness_Probe/webtest_svc.yml
service "webtest" created
$ kubectl get svc/webtest
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webtest 10.104.87.156 <nodes> 5000:32500/TCP 14s
$ kubectl get deployment/webtest
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
webtest 1 1 1 1 28s
$ kubectl describe deployment/webtest
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
webtest-b989c98c5 1 1 1 37s
$ kubectl describe rs/webtest-b989c98c5
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-b989c98c5-9xbg7 1/1 Running 0 1m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=654575471,version=1.0
$ kubectl describe pods/webtest-b989c98c5-9xbg7
$ kubectl exec -it webtest-b989c98c5-9xbg7 -c webtest sh
/usr/src/app # rm main.py
/usr/src/app # exit
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-b989c98c5-9xbg7 0/1 Running 0 4m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=654575471,version=1.0
$ kubectl describe pods/webtest-b989c98c5-9xbg7
...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
6m 6m 1 default-scheduler Normal Scheduled Successfully assigned webtest-b989c98c5-9xbg7 to minikube
6m 6m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-f7lwn"
1m 1m 7 kubelet, minikube spec.containers{webtest} Warning Unhealthy Readiness probe failed: cat: can't open '/usr/src/app/main.py': No such file or directory
1m 1m 3 kubelet, minikube spec.containers{webtest} Warning Unhealthy Liveness probe failed: cat: can't open '/usr/src/app/main.py': No such file or directory
6m 1m 2 kubelet, minikube spec.containers{webtest} Normal Pulled Container image "labdocker/cluster:webservicelite_v1" already present on machine
6m 1m 2 kubelet, minikube spec.containers{webtest} Normal Created Created container
6m 1m 2 kubelet, minikube spec.containers{webtest} Normal Started Started container
1m 1m 1 kubelet, minikube spec.containers{webtest} Normal Killing Killing container with id docker://webtest:Container failed liveness probe.. Container will be killed and recreated.
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-b989c98c5-9xbg7 1/1 Running 1 8m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=654575471,version=1.0
$ kubectl exec -it webtest-b989c98c5-9xbg7 -c webtest sh
/usr/src/app # ls -lh
/usr/src/app # exit
$ kubectl delete deployment/webtest
deployment "webtest" deleted
Part 2 - TCPSocketAction
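The probe section of webtest_deploy_liveness_readiness_port.yml presumably looks something like this sketch (the port and timing values are assumptions, carried over from the ExecAction example above):

# under the container definition (spec.template.spec.containers[]):
readinessProbe:
  tcpSocket:
    port: 5000
  initialDelaySeconds: 15
  periodSeconds: 5
livenessProbe:
  tcpSocket:
    port: 5000
  initialDelaySeconds: 15
  periodSeconds: 15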
$ kubectl apply -f webtest_deploy_liveness_readiness_port.yml
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-7546dd995b-fnkvq 1/1 Running 0 32s environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=3102885516,version=1.0
# Change monitor port from 5000 to 3000
$ vim webtest_deploy_liveness_readiness_port.yml
$ kubectl apply -f webtest_deploy_liveness_readiness_port.yml
deployment "webtest" configured
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-56775d5f5b-km2rh 0/1 Running 0 5s environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=1233181916,version=1.0
webtest-7546dd995b-fnkvq 1/1 Running 0 2m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=3102885516,version=1.0
When using kubectl apply, a new ReplicaSet is created. If the new one is not yet ready, the old one keeps running to maintain uptime.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
5m 5m 1 default-scheduler Normal Scheduled Successfully assigned webtest-56775d5f5b-km2rh to minikube
5m 5m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-f7lwn"
5m 4m 2 kubelet, minikube spec.containers{webtest} Normal Pulled Container image "labdocker/cluster:webservicelite_v1" already present on machine
5m 4m 2 kubelet, minikube spec.containers{webtest} Normal Created Created container
5m 4m 2 kubelet, minikube spec.containers{webtest} Normal Started Started container
4m 4m 1 kubelet, minikube spec.containers{webtest} Normal Killing Killing container with id docker://webtest:Container failed liveness probe.. Container will be killed and recreated.
4m 3m 4 kubelet, minikube spec.containers{webtest} Warning Unhealthy Liveness probe failed: dial tcp 172.17.0.5:3000: getsockopt: connection refused
4m 3m 13 kubelet, minikube spec.containers{webtest} Warning Unhealthy Readiness probe failed: dial tcp 172.17.0.5:3000: getsockopt: connection refused
$ kubectl delete deployment/webtest
deployment "webtest" deleted
Part 3 - HTTPGetAction
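Likewise, webtest_deploy_liveness_readiness_http.yml presumably uses httpGet probes along these lines (the path, port, and timing values are assumptions):

# under the container definition (spec.template.spec.containers[]):
readinessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 15
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 15
  periodSeconds: 15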
$ kubectl apply -f webtest_deploy_liveness_readiness_http.yml
deployment "webtest" created
$ curl -I http://192.168.99.100:32500
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 113
Server: Werkzeug/0.12.2 Python/2.7.13
Date: Sun, 08 Jul 2018 09:23:28 GMT
# Edit file webtest_deploy_liveness_readiness_http.yml to change the httpGet path from / to /init
$ vi webtest_deploy_liveness_readiness_http.yml
$ kubectl apply -f webtest_deploy_liveness_readiness_http.yml
deployment "webtest" configured
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-54bc78fc89-bffj4 0/1 Running 0 5s environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=1067349745,version=1.0
webtest-7786b88db9-cgvds 1/1 Running 0 3m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=3342644865,version=1.0
# Edit file webtest_deploy_liveness_readiness_http.yml to change the httpGet path back from /init to /
$ vi webtest_deploy_liveness_readiness_http.yml
$ kubectl apply -f webtest_deploy_liveness_readiness_http.yml
deployment "webtest" configured
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-54bc78fc89-bffj4 0/1 Terminating 4 1m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=1067349745,version=1.0
webtest-7786b88db9-cgvds 1/1 Running 0 5m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=3342644865,version=1.0
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webtest-7786b88db9-cgvds 1/1 Running 0 5m environment=development,module=WebServer,name=web,owner=Praparn_L,pod-template-hash=3342644865,version=1.0
$ kubectl delete deployment/webtest
deployment "webtest" deleted
Part 4 - Exec Probe for Readiness Probe
- Use Case: stop a Deployment from accepting load by making its readiness probe fail (see the sketch after the note below).
Note that we never delete the Service, because it works the same way for every Deployment; the Service is completely separate from the Deployment/Pods.
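One way to do this (a sketch, not the workshop's actual manifest) is a readiness-only exec probe that checks a flag file, so traffic can be drained without restarting the container:

# under the container definition (spec.template.spec.containers[]):
readinessProbe:
  exec:
    command:
    - cat
    - /usr/src/app/ready    # hypothetical flag file
  periodSeconds: 5

Deleting the flag file inside the container (e.g. kubectl exec -it <pod> -- rm /usr/src/app/ready) makes the readiness probe fail, so the Pod is removed from the Service endpoints while the process keeps running; recreating the file restores traffic.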
Resource Management and HPA
- Kubernetes can manage resources in two ways:
  - Container-level configuration
    - Scoped to an individual container, independently
    - Useful for experiment/test purposes
    - etc.
  - Namespace-level configuration
    - Scoped to the Pods and containers in a namespace
    - Create a global limit and assign it to the namespace
    - The namespace then applies it to any Pod in it (template resource limits)
- What happens when a container exceeds its resources?
  - CPU
    - The container does not get more CPU than allowed; spare CPU is distributed among containers in proportion to their requests
  - Memory
    - The container is killed and restarted
- If not specified, the container gets the default resource size defined at the namespace or system level.
- Container-level configuration (see the sketch below)
  - CPU
    - Request:
      - XXXm (unit: millicores) (ratio: 1024)
      - 0.1 CPU = 100m
      - 1 CPU =
        - 1 vCPU (AWS)
        - 1 core (GCP)
        - 1 vCore (Azure)
        - 1 hyper-thread (bare metal)
      - Maps to Docker's "--cpu-shares"
    - Limit:
      - XXXm (unit: millicores) (ratio: 100000/1000, ~1000)
      - Maps to Docker's "--cpu-quota"
  - Memory
    - Request:
      - XXX (units: Ki, Mi, Gi, Pi, Ei)
    - Limit:
      - XXX (units: Ki, Mi, Gi, Pi, Ei)
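In a Pod or Deployment spec, these settings are expressed per container, for example (a sketch; the request/limit values match those shown later in this workshop's output):

spec:
  containers:
  - name: webtest
    image: labdocker/cluster:webservicelite_v1
    resources:
      requests:
        cpu: 200m        # scheduler guarantee; maps to --cpu-shares
        memory: 100Mi
      limits:
        cpu: 300m        # hard cap; maps to --cpu-quota
        memory: 200Mi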
Workshop 7: Resource Management and HPA
Install the Heapster Addon
$ minikube addons enable heapster
heapster was successfully enabled
$ minikube addons open heapster
This addon does not have an endpoint defined for the 'addons open' command
You can add one by annotating a service with the label kubernetes.io/minikube-addons-endpoint:heapster
$ minikube addons list
- addon-manager: enabled
- coredns: disabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- heapster: enabled
- ingress: disabled
- kube-dns: enabled
- metrics-server: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
Create the Metrics Server
$ git clone https://github.com/kubernetes-incubator/metrics-server.git
Cloning into 'metrics-server'...
remote: Counting objects: 6176, done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 6176 (delta 7), reused 9 (delta 2), pack-reused 6152
Receiving objects: 100% (6176/6176), 6.48 MiB | 1.55 MiB/s, done.
Resolving deltas: 100% (2973/2973), done.
Checking out files: 100% (2430/2430), done.
$ cd metrics-server/
$ kubectl create -f deploy/1.8+/
clusterrolebinding "metrics-server:system:auth-delegator" created
rolebinding "metrics-server-auth-reader" created
apiservice "v1beta1.metrics.k8s.io" created
serviceaccount "metrics-server" created
deployment "metrics-server" created
service "metrics-server" created
clusterrole "system:metrics-server" created
clusterrolebinding "system:metrics-server" created
Part 1 - Container Level Configuration
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.7_Resource_Management/webtest_pod.yml
pod "webtest" created
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.7_Resource_Management/cadvisor_deploy.yml
service "cadvisor" created
deployment "cadvisor" created
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
cadvisor-76c9899d7f-29t9d 1/1 Running 0 15s environment=development,module=Monitor,name=cadvisor,owner=Praparn_L,pod-template-hash=3275455839,version=1.0
webtest 1/1 Running 0 26s environment=development,module=WebServer,name=web,owner=Praparn_L,version=1.0
Open cAdvisor at http://192.168.99.100:31000/containers/
Open a new session.
$ minikube ssh
$ export TERM=xterm
$ top
Open a new session and run md5sum on /dev/urandom to generate CPU load.
$ kubectl exec webtest -c webtest md5sum /dev/urandom
Check allocated resources with kubectl describe node:
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
915m (45%) 1 (50%) 176Mi (9%) 202Mi (10%)
Part 2 - Namespace Quota
Create a Namespace
$ kubectl create namespace webtest-namespace
namespace "webtest-namespace" created
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.7_Resource_Management/webtest_quota.yml --namespace=webtest-namespace
resourcequota "webtest-quota" created
$ kubectl describe namespace/webtest-namespace
Name: webtest-namespace
Labels: <none>
Annotations: <none>
Status: Active
Resource Quotas
Name: webtest-quota
Resource Used Hard
-------- --- ---
limits.cpu 0 4
limits.memory 0 4Gi
pods 0 4
requests.cpu 0 1
requests.memory 0 1Gi
No resource limits.
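webtest_quota.yml presumably defines a ResourceQuota along these lines (a sketch reconstructed from the describe output above; the actual file may differ):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: webtest-quota
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "4"
    limits.memory: 4Gi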
Create a Deployment
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.7_Resource_Management/webtest_deploy.yml --namespace=webtest-namespace
service "webtest" created
deployment "webtest" created
$ kubectl get deployment/webtest --namespace=webtest-namespace
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
webtest 1 0 0 0 9s
$ kubectl get rs --namespace=webtest-namespace
NAME DESIRED CURRENT READY AGE
webtest-77794cd4b 1 0 0 15s
$ kubectl get svc/webtest --namespace=webtest-namespace
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webtest 10.108.193.222 <nodes> 5000:32500/TCP 19s
$ kubectl get pods --namespace=webtest-namespace
No resources found.
Check Why the Deployment Failed
$ kubectl describe rs --namespace=webtest-namespace
...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-xslcm" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-g7l8r" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-d4x77" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-hqvmd" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-5cnw7" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-wtz7d" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-wcrdt" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-mllxn" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 1m 1 replicaset-controller WarningFailedCreate Error creating: pods "webtest-77794cd4b-bvcx4" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
1m 26s 5 replicaset-controller WarningFailedCreate (combined from similar events): Error creating: pods "webtest-77794cd4b-f4cbc" is forbidden: failed quota: webtest-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
Create Limit Range
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.7_Resource_Management/webtest_limit.yml --namespace=webtest-namespace
limitrange "webtest-limit" created
$ kubectl describe namespace/webtest-namespace
Name: webtest-namespace
Labels: <none>
Annotations: <none>
Status: Active
Resource Quotas
Name: webtest-quota
Resource Used Hard
-------- --- ---
limits.cpu 0 4
limits.memory 0 4Gi
pods 0 4
requests.cpu 0 1
requests.memory 0 1Gi
Resource Limits
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Pod cpu 200m 1 - - -
Pod memory 6Mi 1Gi - - -
Container cpu 100m 1 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
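Similarly, webtest_limit.yml presumably defines a LimitRange along these lines (a sketch reconstructed from the table above; the actual file may differ):

apiVersion: v1
kind: LimitRange
metadata:
  name: webtest-limit
spec:
  limits:
  - type: Pod
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "1"
      memory: 1Gi
  - type: Container
    min:
      cpu: 100m
      memory: 3Mi
    max:
      cpu: "1"
      memory: 1Gi
    defaultRequest:      # used when a container omits resource requests
      cpu: 200m
      memory: 100Mi
    default:             # used when a container omits resource limits
      cpu: 300m
      memory: 200Mi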
Redeploy
$ kubectl create -f https://raw.githubusercontent.com/praparn/kubernetes_20180701/master/WorkShop_1.7_Resource_Management/webtest_deploy.yml --namespace=webtest-namespace
service "webtest" created
deployment "webtest" created
$ kubectl get deployment/webtest --namespace=webtest-namespace
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
webtest 1 1 1 1 8s
$ kubectl get pods --namespace=webtest-namespace
NAME READY STATUS RESTARTS AGE
webtest-77794cd4b-nnjzn 1/1 Running 0 55s
$ kubectl describe pods webtest-77794cd4b-nnjzn --namespace=webtest-namespace
...
Limits:
cpu: 300m
memory: 200Mi
Requests:
cpu: 200m
memory: 100Mi
Cleanup
$ kubectl delete deployments/webtest --namespace=webtest-namespace
deployment "webtest" deleted
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
cadvisor 1 1 1 1 39m
$ kubectl delete deployments/cadvisor
deployment "cadvisor" deleted
Horizontal Pod Autoscaling (HPA)
- Manual scaling is easy with a single command: "kubectl scale --replicas=XXX deployment/<name>"
- But how do you scale up/down to match what is actually required?
- HPA is responsible for monitoring the workload on Pods (currently based on CPU) and automatically triggering the Deployment to scale the application up or down
Kubernetes is replacing Heapster with the metrics server, but the migration is not yet complete, so both are still supported.
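A minimal sketch of adding an HPA for the webtest Deployment (the CPU threshold and replica counts here are assumptions, not from the workshop):
$ kubectl autoscale deployment webtest --cpu-percent=50 --min=1 --max=5

or the equivalent manifest:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webtest
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webtest
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50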