Make things persistent

via commit 1c91ff1 (chore: change shortcodes format (HTML tag like)) by Gerkin · ☕ 5 min read

As you may know, docker (and thus, kubernetes) does not persist anything by default. That means that every time you restart a pod (container), it comes back in the exact same state as at its first execution, except for the mount points. Those mount points are real hard-drive directories injected into your pod. Some apps we’ll set up later will need to persist data, and, more generally, when you run real applications on your own, they will probably use a database or some other kind of stateful storage.
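
If you want to see this by yourself, here is a quick sketch (the pod name ephemeral-test is an arbitrary choice): write a file inside a throwaway pod, recreate the pod, and watch the file disappear.

# Start a throwaway pod and write a file in it
kubectl run ephemeral-test --image=nginx
kubectl wait --for=condition=Ready pod/ephemeral-test
kubectl exec ephemeral-test -- sh -c 'echo "I will not survive" > /tmp/proof.txt'
# Recreate the pod: the file is gone
kubectl delete pod ephemeral-test
kubectl run ephemeral-test --image=nginx
kubectl wait --for=condition=Ready pod/ephemeral-test
kubectl exec ephemeral-test -- cat /tmp/proof.txt # => No such file or directory
kubectl delete pod ephemeral-test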

Since this part is highly tied to your specific setup, you should really do it by yourself using the references above. But in case you just want something basic working, I’ll guide you through the setup of  CephFS.

❗ I repeat myself, but you should really do this part by yourself. Do not use these files as-is if your cluster hosts real applications. But, just like you, I’m doing tests here, and I’m tired of scouring the whole internet just to experiment with a few things.

Now that I’ve warned you enough (just look above, again), let’s declare a  persistent volume!

Set up CephFS

Check prerequisites

Ceph needs lvm2 and the rbd kernel module on every storage node:

dnf install -y lvm2

modprobe rbd
# Auto-load the module at startup
echo "rbd" > /etc/modules-load.d/cephfs.conf

Make sure you have at least one unused disk or partition, without any filesystem on it.

lsblk -f
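
On a node with a spare disk, the output might look something like this (hypothetical device names, columns trimmed for brevity). A device with no filesystem and no mount point, like sdb here, can be claimed by Ceph:

NAME   FSTYPE MOUNTPOINT
sda
├─sda1 xfs    /boot
└─sda2 xfs    /
sdb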

Then, fetch the resources for Ceph:

mkdir -p ./kubernetes/rook/storageclass
git clone --single-branch --branch v1.5.6 https://github.com/rook/rook.git ~/rook
cp ~/rook/cluster/examples/kubernetes/ceph/{crds,common,operator,cluster}.yaml ./kubernetes/rook
# The FileSystem configuration
cp ~/rook/cluster/examples/kubernetes/ceph/filesystem.yaml ./kubernetes/rook/filesystem.yaml
# Filesystem storage class base config. See https://rook.io/docs/rook/v1.5/ceph-filesystem.html
cp ~/rook/cluster/examples/kubernetes/ceph/csi/cephfs/storageclass.yaml ./kubernetes/rook/storageclass/filesystem.yaml
# Block storage class base config. See https://rook.io/docs/rook/v1.5/ceph-block.html
cp ~/rook/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml ./kubernetes/rook/storageclass/block.yaml

Take some time to RTFM and configure operator, cluster, filesystem, storageclass/filesystem & storageclass/block.

Want to store data on your master node? Update the tolerations so that the Ceph pods can be scheduled on the master node, by modifying ./kubernetes/rook/cluster.yaml like below:

# ...
kind: CephCluster
metadata:
  name: rook-ceph
spec:
  placement:
    all: 
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/controlplane
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
Then, apply all the resources:

kubectl apply -f ./kubernetes/rook/crds.yaml
kubectl apply -f ./kubernetes/rook/common.yaml
kubectl apply -f ./kubernetes/rook/operator.yaml
kubectl apply -f ./kubernetes/rook/cluster.yaml
kubectl apply -f ./kubernetes/rook/filesystem.yaml
kubectl apply -f ./kubernetes/rook/storageclass/block.yaml
kubectl apply -f ./kubernetes/rook/storageclass/filesystem.yaml
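
The cluster takes a few minutes to bootstrap. A simple way to follow the progress is to watch the pods; once the rook-ceph-osd-* pods are Running, the storage is usable:

kubectl -n rook-ceph get pods -w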

Configure the rook cluster

Open your ./kubernetes/rook/cluster.yaml file for further customization.

  • The default settings save rook data in /var/lib/rook. This can be changed by setting dataDirHostPath.
  • If working with only 1 or 2 workers, make sure that spec.mon.count is equal to 1 (for testing purposes only: monitors normally run as an odd-sized quorum of 3 or more).
  • It is highly advised to explicitly set spec.storage instead of letting rook consume every device it finds; see the sketch below.
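
For instance, a minimal single-node test configuration could look like the excerpt below (the node name worker-1 and the device sdb are assumptions: adapt them to your own lsblk output):

# ... (excerpt of ./kubernetes/rook/cluster.yaml)
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1 # a single monitor: testing only!
    allowMultiplePerNode: true
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: "worker-1" # must match the node's kubernetes.io/hostname label
        devices:
          - name: "sdb" # the unused disk spotted earlier with lsblk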

Once configured, apply the changes:

kubectl apply -f ./kubernetes/rook/cluster.yaml
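
If you want to check the cluster health, rook ships a toolbox manifest in the same examples directory (the commands below assume you kept the clone in ~/rook):

kubectl apply -f ~/rook/cluster/examples/kubernetes/ceph/toolbox.yaml
# "ceph status" should eventually report HEALTH_OK
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status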

Create a test PVC

Create the following manifest as ./kubernetes/rook/xx-PersistentNginx.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: persistent-nginx
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
  namespace: persistent-nginx
spec:
  storageClassName: rook-cephfs
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: persistent-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: test-pv-storage
          persistentVolumeClaim:
            claimName: test-pv-claim
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: test-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: persistent-nginx
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: nginx
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-tls
  namespace: persistent-nginx # Must be the same as the service
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`persistent-nginx.{{cluster.baseHostName}}`)
    kind: Rule
    services:
    # Beware: the service MUST be in the same namespace as the IngressRoute.
    - name: nginx
      kind: Service
      port: 80
  tls:
    certResolver: myresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-notls
  namespace: persistent-nginx # Must be the same as the service
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`persistent-nginx.{{cluster.baseHostName}}`)
    kind: Rule
    services:
    # Beware: the service MUST be in the same namespace as the IngressRoute.
    - name: nginx
      kind: Service
      port: 80
Apply it, then write a test page into the persistent volume:

kubectl apply -f ./kubernetes/rook/xx-PersistentNginx.yaml
kubectl -n persistent-nginx get pvc -w
kubectl -n persistent-nginx exec -it deploy/nginx -- bash -c 'echo "Hello world !" | tee /usr/share/nginx/html/index.html'

Going to  http://persistent-nginx.{{cluster.baseHostName}}/index.html should display Hello world !. And if you kill and restart your container, your data will be kept ;).
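
The same check from a terminal, assuming the hostname resolves to your cluster:

curl http://persistent-nginx.{{cluster.baseHostName}}/index.html
# Hello world !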

kubectl -n persistent-nginx exec -it deploy/nginx -- /bin/sh -c "kill 1"
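
Kubernetes restarts the container right away; you can then check that the page survived:

kubectl -n persistent-nginx get pods # wait until the pod is Running again
kubectl -n persistent-nginx exec -it deploy/nginx -- cat /usr/share/nginx/html/index.html
# Hello world !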

Dashboard

If you’re planning to expose the dashboard outside of the cluster, set spec.dashboard.ssl to false in ./kubernetes/rook/cluster.yaml, since traefik will handle the SSL encryption. Then, deploy the routing:

apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph # namespace:cluster
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph

---

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-dashboard-tls
  namespace: rook-ceph # Must be the same as the service
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`ceph.{{cluster.baseHostName}}`)
    kind: Rule
    services:
    # Beware: the service MUST be in the same namespace as the IngressRoute.
    - name: rook-ceph-mgr-dashboard
      kind: Service
      port: 8443
  tls:
    certResolver: myresolver
kubectl apply -f ./kubernetes/rook/dashboard-ingress.yaml

After this, you should be able to access the Ceph dashboard via  https://ceph.{{cluster.baseHostName}}


  • The default username is admin.
  • The default password is stored in a secret. To get it, run:
    kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
    

Having problems?  RTFM

If you made mistakes and want to purge rook-ceph, remove the following paths on each node running CephFS (usually, all the worker nodes):

rm -r /var/lib/rook
rm -r /var/lib/kubelet/plugins/rook*
rm -r /var/lib/kubelet/plugins_registry/rook*
wipefs --all --force /dev/sdX # Repeat for each disk used by Ceph
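
If Ceph already created LVM volumes on a disk, wipefs may not be enough. The rook teardown documentation also suggests zapping the partition table and removing leftover device mappers (again, adapt /dev/sdX):

sgdisk --zap-all /dev/sdX
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
rm -rf /dev/ceph-*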

See  rook-ceph-crash-collector-keyring secret not created for crash reporter · Issue #4553 · rook/rook



Hey, we’ve done important things here! Maybe it’s time to commit…

git add .
git commit -m "Make things persistent

Following guide @ https://gerkindev.github.io/devblog/walkthroughs/kubernetes/05-storage/"