As you may know, Docker (and thus Kubernetes) does not persist anything by default. That means that every time you restart a pod (container), it comes back in the exact same state as on its first execution, except for the mount points. Those mount points are real hard-drive directories injected into your pod. Some apps we'll set up later will need to persist data, and, more generally, when you run real applications on your own, they will probably use a database or something similar.
Since this part is highly tied to your specific setup, you should really do it by yourself using the references above. But in case you want a basic working setup, I'll guide you through the installation of CephFS.
⚠️ I repeat myself, but you should really do this part by yourself. Do not use these manifests as-is if you are using your cluster to host real applications. But just like you, I'm doing tests here, and I am tired of walking around the whole internet just to experiment with a few things.
Now that I've warned you enough (just look above, again), let's declare a persistent volume! First, load the rbd kernel module, which is needed to mount Ceph block devices:
modprobe rbd
# Auto-load it at startup
echo "rbd" > /etc/modules-load.d/cephfs.conf
Make sure you have at least one unused disk or partition (with no filesystem on it) that Ceph can consume:
lsblk -f
Then, create the resources for Ceph:
mkdir -p ./kubernetes/rook/storageclass
git clone --single-branch --branch v1.5.6 https://github.com/rook/rook.git ~/rook
cp ~/rook/cluster/examples/kubernetes/ceph/{crds,common,operator,cluster}.yaml ./kubernetes/rook
# The FileSystem configuration
cp ~/rook/cluster/examples/kubernetes/ceph/filesystem.yaml ./kubernetes/rook/filesystem.yaml
# Filesystem storage class base config. See https://rook.io/docs/rook/v1.5/ceph-filesystem.html
cp ~/rook/cluster/examples/kubernetes/ceph/csi/cephfs/storageclass.yaml ./kubernetes/rook/storageclass/filesystem.yaml
# Block storage class base config. See https://rook.io/docs/rook/v1.5/ceph-block.html
cp ~/rook/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml ./kubernetes/rook/storageclass/block.yaml
Take some time to RTFM and configure operator, cluster, filesystem, storageclass/filesystem & storageclass/block.
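In cluster.yaml, for instance, you will want to control which nodes and devices Ceph is allowed to consume. A minimal sketch of the relevant section (the field names come from the CephCluster CRD; the deviceFilter value is only an example for your own disks):

# cluster.yaml (CephCluster spec) -- sketch only, adapt to your hardware
spec:
  storage:
    useAllNodes: true
    useAllDevices: false
    # Only consume devices matching this pattern (example value)
    deviceFilter: "sdb"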
Want to store data on your master node? Update the discover tolerations so the discovery pods can be scheduled on the master node, by modifying ./kubernetes/rook/operator.yaml as shown below.
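A sketch of what that could look like; the env var name follows the commented examples shipped in operator.yaml, so double-check it against your Rook version:

# operator.yaml, in the rook-ceph-operator Deployment env section
- name: DISCOVER_TOLERATIONS
  value: |
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule

Once everything is configured, deploying boils down to applying the manifests in order. A sketch, assuming the file layout copied above:

kubectl apply -f ./kubernetes/rook/crds.yaml -f ./kubernetes/rook/common.yaml -f ./kubernetes/rook/operator.yaml
kubectl apply -f ./kubernetes/rook/cluster.yaml
kubectl apply -f ./kubernetes/rook/filesystem.yaml
kubectl apply -f ./kubernetes/rook/storageclass/filesystem.yaml -f ./kubernetes/rook/storageclass/block.yaml
# Watch the operator bring the cluster up
kubectl -n rook-ceph get pods -w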
Now, let's test persistence with a simple nginx deployment backed by a PersistentVolumeClaim. Save the following as ./kubernetes/rook/xx-PersistentNginx.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: persistent-nginx
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
  namespace: persistent-nginx
spec:
  storageClassName: rook-cephfs
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: persistent-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: test-pv-storage
          persistentVolumeClaim:
            claimName: test-pv-claim
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: test-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: persistent-nginx
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: nginx
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-tls
  namespace: persistent-nginx # Must be the same as the service
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`persistent-nginx.scitizen.loc`)
      kind: Rule
      services:
        # Beware: the service MUST be in the same namespace as the IngressRoute.
        - name: nginx
          kind: Service
          port: 80
  tls:
    certResolver: myresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-notls
  namespace: persistent-nginx # Must be the same as the service
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`persistent-nginx.scitizen.loc`)
      kind: Rule
      services:
        # Beware: the service MUST be in the same namespace as the IngressRoute.
        - name: nginx
          kind: Service
          port: 80
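The claim references storageClassName: rook-cephfs, which matches the name used in Rook's CephFS storage class example; if you renamed it in ./kubernetes/rook/storageclass/filesystem.yaml, update the claim accordingly. A quick way to check what is available:

kubectl get storageclass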
kubectl apply -f ./kubernetes/rook/xx-PersistentNginx.yaml
# Wait for the claim to reach the Bound status
kubectl -n persistent-nginx get pvc -w
# Write a test file on the persistent volume
kubectl -n persistent-nginx exec -it deploy/nginx -- bash -c 'echo "Hello world !" | tee /usr/share/nginx/html/index.html'
Going to http://persistent-nginx.scitizen.loc/index.html should display Hello world !. And if you kill and restart your container, your data will be kept ;).
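To convince yourself, here is a quick check: delete the pod, let the Deployment recreate it, then read the file back.

kubectl -n persistent-nginx delete pod -l app=nginx
kubectl -n persistent-nginx rollout status deploy/nginx
# The file written earlier should still be there
kubectl -n persistent-nginx exec deploy/nginx -- cat /usr/share/nginx/html/index.html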
If you're planning to expose the dashboard from outside of the cluster, you have to set spec.dashboard.ssl to false in cluster.yaml, since Traefik will handle the TLS encryption.
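A sketch of the relevant dashboard block of the CephCluster spec; note that depending on your settings the dashboard may not listen on 8443, so check spec.dashboard.port and keep the Service below in sync:

# cluster.yaml (CephCluster spec) -- sketch
spec:
  dashboard:
    enabled: true
    # Traefik terminates TLS, so the dashboard itself serves plain HTTP
    ssl: false

Then, deploy the routing: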
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph # namespace:cluster
spec:
  ports:
    - name: dashboard
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-dashboard-tls
  namespace: rook-ceph # Must be the same as the service
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ceph.scitizen.loc`)
      kind: Rule
      services:
        # Beware: the service MUST be in the same namespace as the IngressRoute.
        - name: rook-ceph-mgr-dashboard
          kind: Service
          port: 8443
  tls:
    certResolver: myresolver
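To log in, the default user is admin, and the generated password lives in a secret; this is the retrieval command documented by Rook:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode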
If you made errors and want to purge rook-ceph, remove the following paths on each node running Ceph (usually, all the worker nodes):
rm -r /var/lib/rook
rm -r /var/lib/kubelet/plugins/rook*
rm -r /var/lib/kubelet/plugins_registry/rook*
wipefs --all --force /dev/sdX # Each of the used disks
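Before wiping the nodes, you probably also want to remove the Kubernetes side of things. A sketch, assuming the file layout from earlier (the official Rook teardown documentation covers stuck finalizers if deletion hangs):

kubectl delete -f ./kubernetes/rook/storageclass/filesystem.yaml -f ./kubernetes/rook/storageclass/block.yaml
kubectl delete -f ./kubernetes/rook/filesystem.yaml
kubectl delete -f ./kubernetes/rook/cluster.yaml
kubectl delete -f ./kubernetes/rook/operator.yaml -f ./kubernetes/rook/common.yaml -f ./kubernetes/rook/crds.yaml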