Push k8s training
commit e9da09d5f4

@@ -0,0 +1,16 @@
# cloud-practical

Skeleton project and files for cloud practicals.

## Configuration

1. Configure a new environment for Kubernetes

```bash
# sudo needed to install docker
sudo ./setup-env.sh
```

2. Create the cluster

```bash
k3d cluster create --config <use_your_conf>
```

3. Use kubectl
@@ -0,0 +1,13 @@
---
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: upec
image: docker.io/rancher/k3s:v1.24.4-k3s1
servers: 1
agents: 2

registries:
  create:
    host: "127.0.0.1"
    hostPort: "5000"
@@ -0,0 +1,100 @@
## A simple web application

You will manipulate pods and configmaps in order to deploy a microservice in Kubernetes.

#### 1. Declare a pod

Complete the following Pod manifest in order to deploy an nginx container with an empty volume attached to
`/usr/share/nginx/html`.

```yaml
# To create: kubectl apply -f pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: simple-pod-nginx
  labels:
    exo: simple-pod
spec:
  volumes:
    - emptyDir: {}
      name: html
  containers: [] # TODO
```
<br>

You can review the results through various kubectl commands.

```bash
# Describe a resource and show event logs from Kubernetes controllers
kubectl describe pod/simple-pod-nginx

# Get a brief summary of a pod
kubectl get pod/simple-pod-nginx

# Get a brief summary of resources filtered by label, in YAML format
kubectl get all -l "exo=simple-pod" -o yaml

# Get a specific field value from a resource (here the pod's IP)
kubectl get pod/simple-pod-nginx -o jsonpath='{.status.podIP}'
```

You can also print the logs of the containerized application.

```bash
kubectl logs pod/simple-pod-nginx
```
#### 2. Access the nginx service

By default, the nginx service listens on port 80. If not already done, you should expose the container port in the pod declaration (see `pod.spec.containers.ports`).

Then it can be accessed by forwarding your [localhost:8080](http://127.0.0.1:8080) to the pod's port.

```bash
kubectl port-forward pod/simple-pod-nginx 8080:80
```

<br>

Surprisingly, the web page shows a 403 error.
> Can you hypothesize why it occurred? The application logs may help.

<br>

However, we notice that the pod is reported as ready, which contradicts the real state of our application.

```bash
NAME                   READY   STATUS    RESTARTS   AGE
pod/simple-pod-nginx   1/1     Running   0          1m
```
#### 3. Evaluate the readiness of a container

Try to add a `readinessProbe` to your container definition that leaves the pod in an unready state until the web page answers with a 200 response code.
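For reference, an HTTP readiness probe could be sketched as below; the path and timings are assumptions to adjust to your container:

```yaml
# Hypothetical probe for the nginx container; tune the delays as needed
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 2
  periodSeconds: 5
```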
#### 4. Fix the web page

You should update the web content by uploading the `index.html` to the container web root.
> `kubectl cp` can be quite handy

The page should now display correctly and the pod should move to a ready state. However, while it's okay to use `cp` in some cases, here the changes won't persist beyond the pod's lifetime.
#### 5. Storing configuration data

A ConfigMap is a very handy resource that can hold configuration data for an application.

You can use the CLI and `kubectl create` to generate the configuration from the `index.html` file.
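For example, assuming `index.html` sits in the current directory and reusing the `simple-pod-html` name from the skeleton below, a one-liner could be:

```bash
# Generates a ConfigMap named simple-pod-html from the local index.html
# (--dry-run=client -o yaml prints the manifest instead of creating it)
kubectl create configmap simple-pod-html --from-file=index.html --dry-run=client -o yaml
```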
Alternatively, you can choose to adjust the following manifest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-pod-html
data: {}
```

Finally, modify the pod declaration in order to mount this configmap at `/usr/share/nginx/html`.

The application should serve its html content from the configmap, which allows the configuration to survive reboots.
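A sketch of how the pod spec could mount the ConfigMap instead of the empty volume (the container name and image are assumptions; keep your own):

```yaml
# The html volume now sources its content from the ConfigMap
spec:
  volumes:
    - name: html
      configMap:
        name: simple-pod-html
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
```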
@@ -0,0 +1,20 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>OK</title>
</head>
<body>
    <main>
        <div style="
            text-align: center;
            background: green;
            color: white;
        ">
            <h1>Nice ! Have some pod-ing 🥞</h1>
        </div>
    </main>
</body>
</html>
@@ -0,0 +1,6 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: simple-pod-html
spec: {}
@@ -0,0 +1,90 @@
## Resiliency of pods

#### 1. Simulate a node failure

A pod is the smallest unit of compute in Kubernetes. It is tied to a single node's runtime and does not come with a lot of workload logic.

To ensure that the control plane does not schedule pods, run this command:
```bash
kubectl taint node/k3d-upec-server-0 node-role.kubernetes.io/master:NoSchedule
```

This command prints the node running our simple-pod:

```bash
kubectl get pod/simple-pod-nginx -o jsonpath='{.spec.nodeName}{"\n"}'
```
<br>

We will observe the behaviour of this pod in the event of a node failure.

1. Stop the node where the pod is running with `docker stop <node>`

2. Check the state of the pod. Is it still running?

> Don't forget to run `docker start <node>` before continuing.
#### 2. Create replicas

A naive solution to this problem is to create replicas of this pod spread across multiple nodes.

You will create a `ReplicationController` resource that adds workload logic to our application by maintaining 3 replicas of our pod.

> The `spec.template` object describes the pod that will be created in the case of insufficient replicas. It is the same as the pod declaration.

```yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: simple-rc-nginx
spec: {}
```

> Be careful with the usage of labels and selectors. Read the documentation.
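A minimal sketch of the spec, assuming the pod template reuses the `exo: simple-pod` label from the first exercise:

```yaml
# selector must match the labels of the pod template
spec:
  replicas: 3
  selector:
    exo: simple-pod
  template:
    metadata:
      labels:
        exo: simple-pod
    spec:
      containers: [] # same containers as your pod declaration
```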
<br>

As usual, you can monitor the state of the controller and the number of ready replicas.

```bash
kubectl get rc/simple-rc-nginx
NAME              DESIRED   CURRENT   READY   AGE
simple-rc-nginx   3         3         3       21s
```
<br>

If you delete one pod, the controller should detect it and schedule a new pod to replace it.

<br>

Additionally you can scale replicas up and down, either by modifying the resource/manifest or with kubectl.
```bash
kubectl scale --replicas=5 rc/simple-rc-nginx
```
#### 3. Create a service

Port forwarding to a pod is not very convenient with multiple replicas. Ideally we need a way to address them in a load-balanced manner.
A `Service` resource is the standard way of exposing an application inside the cluster. It uses selectors to distribute traffic amongst selected pods.

Create a service of type `ClusterIP` (see `svc.spec.type`) to expose the replicas inside the cluster.

> You should consider using label selectors as you did for the RC.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-rc-nginx
spec: {}
```
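A sketch of the spec, again assuming the `exo: simple-pod` label on the replicas:

```yaml
# Routes in-cluster traffic on port 80 to matching pods
spec:
  type: ClusterIP
  selector:
    exo: simple-pod
  ports:
    - port: 80
      targetPort: 80
```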
You can then access this service through a forwarded port.

```bash
kubectl port-forward svc/simple-rc-nginx 8080:80
```
@ -0,0 +1,6 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: simple-rc-nginx
|
||||
spec: {}
|
||||
@ -0,0 +1,6 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: simple-rc-nginx
|
||||
spec: {}
|
||||
@@ -0,0 +1,48 @@
## Publication of our microservice

We will create a Deployment resource that provides declarative updates for Pods along with other useful features.

#### 1. Declare a deployment

First, the Deployment declaration will be quite similar to the one of our ReplicationController.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-deploy-nginx
spec: {}
```
Once it is running, we will modify the manifest in order to add a basic authentication flow. Deployments can detect changes and perform rolling updates of the pods' configuration.

Precisely, it must include some additional elements:

- a ConfigMap containing the file `auth-nginx.conf` and mounted on `/etc/nginx/conf.d`
- a Secret containing the file `.htpasswd` and mounted on `/secrets`

Both ConfigMaps and Secrets store data the same way (in key/value pairs), but ConfigMaps are meant for plain-text data. Secret values, on the other hand, are `base64`-encoded as they can contain binary data.

> `htpasswd -c .htpasswd alice` creates a new credential file that contains the MD5 hash of alice's password.

```yaml
---
apiVersion: v1
kind: Secret
data: {}
```
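Inside the deployment's pod template, the two mounts could be sketched as follows; the ConfigMap, Secret, and volume names here are assumptions, use whatever you chose:

```yaml
# Hypothetical names: auth-nginx-conf (ConfigMap), nginx-htpasswd (Secret)
spec:
  volumes:
    - name: nginx-conf
      configMap:
        name: auth-nginx-conf
    - name: htpasswd
      secret:
        secretName: nginx-htpasswd
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
        - name: htpasswd
          mountPath: /secrets
```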
<br>

Don't forget to create a service to expose your deployment inside the cluster.

> You can also scale replicas up and down in a deployment.

The application should prompt you to enter a login and password before serving the page.

#### 2. Expose your deployment

Once again, expose your deployment through a ClusterIP service that makes it reachable from __inside__ the cluster.
@@ -0,0 +1,12 @@
# mount this file to /etc/nginx/conf.d
server {
    listen 80;

    root /usr/share/nginx/html;

    location / {
        index index.html;
        auth_basic "Basic Auth from Kubernetes secret";
        auth_basic_user_file /secrets/.htpasswd;
    }
}
@ -0,0 +1,6 @@
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: simple-deploy-nginx
|
||||
spec: {}
|
||||
@@ -0,0 +1,57 @@
## Scale workload to match demand

Horizontal scaling means that the response to increased load is to deploy more Pods.

#### 1. Setup a server-side web app

We will deploy a simple PHP webapp that repeatedly accumulates square roots, making each request CPU-intensive.

Complete the following manifest:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-hpa-app
spec:
  template:
    metadata:
      name: app
    spec:
      containers:
        - name: php-apache
          image: php:7.2-apache
```
You will need to mount the following `index.php` file to `/var/www/html` (preferably as a configmap).

```php
<?php
$x = 0.0001;
for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
}
echo "OK! Sum is $x";
?>
```

This script is very simple, but it can also be quite intense to compute when forked multiple times.
#### 2. Set compute requirements

You will update the deployment in order to define CPU requirements for each pod. They allow you to both reserve and limit the amount of compute resources used by each pod.

Set the correct values so that each pod's CPU always stays between `200m` and `500m`.

> It is also possible to set RAM requirements.
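The relevant fragment of the container definition, matching the `200m`–`500m` bounds above:

```yaml
# Per-container compute requirements: reserve 200m, cap at 500m
resources:
  requests:
    cpu: 200m
  limits:
    cpu: 500m
```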
#### 3. Testing

You can generate load on the application with the provided test pod, which continuously requests the service, and watch the autoscaler react.
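The `HorizontalPodAutoscaler` spec to complete could be sketched as follows; the replica bounds and the 50% CPU target are assumptions for illustration:

```yaml
# Hypothetical autoscaler: 1 to 5 replicas targeting 50% average CPU
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: simple-hpa-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```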
@@ -0,0 +1,6 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-hpa-app
spec: {}

@@ -0,0 +1,6 @@
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: simple-hpa-app
spec: {}

@@ -0,0 +1,7 @@
<?php
$x = 0.0001;
for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
}
echo "OK! Sum is $x";
?>
@@ -0,0 +1,17 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: simple-hpa-test
spec:
  restartPolicy: Never
  containers:
    - name: test
      image: busybox:latest
      command:
        - /bin/sh
        - -c
        - |
          while sleep 0.01; do
            wget -q -O- http://simple-hpa-app
          done
@@ -0,0 +1,31 @@
## Objective:
Create a web application with several microservices and expose these services through a single Ingress, using different URL paths for each service.

#### 1. Tasks:

Create several Deployments:

Deploy three distinct Node.js applications:
- A "frontend" service that serves a basic HTML page.
- An "api" service that provides a simple REST API (up to you to define it).
- An "admin" service that contains a secured administration interface.

#### 2. Create Services:

Expose each of the services through a Service of type ClusterIP.
Create an Ingress:

Configure an Ingress to route HTTP traffic to the different services based on URL paths:
- / : routes to the "frontend" service
- /api : routes to the "api" service
- /admin : routes to the "admin" service
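The path-based routing could be sketched as below; the service names and ports are assumptions matching the services you created:

```yaml
# Hypothetical Ingress routing / , /api and /admin to three ClusterIP services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: admin
                port:
                  number: 80
```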
#### 3. Test:

Access each service using the corresponding URL paths.

#### 4. Secure:

Add authentication for the admin part.
@@ -0,0 +1,59 @@
#!/usr/bin/env bash

[ -n "$DEBUG" ] && set -e
#exec 3>&1 &>/dev/null

VENV="${1:-.venv}"
ACTIVATE="$VENV/bin/activate"

if grep -qEi 'debian|ubuntu|mint' /etc/*release; then
    PKGMANAGER="apt"
    PKGMANAGER_CACHE="apt update"
elif grep -qEi 'fedora|centos|redhat' /etc/*release; then
    PKGMANAGER="yum"
    PKGMANAGER_CACHE="yum makecache"
else
    echo "OS is not supported."
    exit 1
fi

pkg_exist () {
    $PKGMANAGER list --installed 2>/dev/null | grep -qi "^$1" || command -v "$1" &>/dev/null
}

install_deps () {

    if ! pkg_exist docker; then
        # Note: the Docker install path below assumes an apt-based distribution
        apt-get -y install ca-certificates gnupg lsb-release
        mkdir -p /etc/apt/keyrings
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
        echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
        apt-get update
        apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
        usermod -a -G docker etudiant
    else
        echo "[Ok] $(which docker)"
    fi

    if ! pkg_exist k3d; then
        wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
    else
        echo "[Ok] $(which k3d)"
    fi

    BIN="${PATH%%:*}"
    mkdir -p "$BIN"

    if ! pkg_exist kubectl; then
        curl -Ls "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" -o "$BIN/kubectl"
        chmod +x "$BIN/kubectl"
    else
        echo "[Ok] $(which kubectl)"
    fi
}

install_deps

echo "######################################################"
echo "Use kubectl ..."