Sunday, September 12, 2021

File Sharing using NFS in GKE Cluster

There was a requirement to create a common file-sharing location accessible by specific pods distributed across all four worker nodes. A regular PV and PVC is not suitable, because a PV disk can only be mounted by a pod located in the same zone as the disk.

There are different methods to overcome this issue, and I selected the NFS server method.

There are a few steps:

1. Create an NFS server deployment and expose it as a service
2. Mount a PVC to the above-mentioned NFS deployment
3. Create a PV and PVC using the NFS service
4. Mount the PVC into the required pods

I will explain the above steps in detail here.

01: Create persistent volume claim in GKE

The manifest below creates a persistent volume claim for a 5Gi disk; GKE will dynamically provision the backing disk, and the NFS server will use it.
       
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
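
To apply and verify the claim, something like the following should work (the file name nfs-pvc.yaml is just an example for where the manifest above is saved):

kubectl apply -f nfs-pvc.yaml
# the claim should move to Bound once GKE provisions the backing disk
kubectl get pvc nfs-pv-provisioning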

02: Create NFS server deployment

This manifest will create the NFS server deployment using the PVC created above.
       
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
          - name: nfs
            containerPort: 2049
          - name: mountd
            containerPort: 20048
          - name: rpcbind
            containerPort: 111
        # privileged mode is required by the NFS server image
        securityContext:
          privileged: true
        volumeMounts:
          - mountPath: /resources
            name: mypvc
      volumes:
        # back the NFS export with the PVC created in step 01
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning
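
Assuming the manifest is saved as nfs-server-deployment.yaml (example name), it can be applied and checked like this:

kubectl apply -f nfs-server-deployment.yaml
# the NFS server pod should reach the Running state
kubectl get pods -l app=nfs-server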


03: Expose NFS server as service

Now we can expose the NFS server as a service.
I faced an issue while creating the service using the YAML file, so I ended up exposing it through the GCP console instead.
       
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    # must match the pod labels of the nfs-server deployment
    app: nfs-server

Once done, check the cluster IP of the NFS service; you will need it to configure the persistent volume for the pods, as mentioned below.
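
For example, the cluster IP can be read with kubectl (using the service name defined above):

kubectl get service nfs-service -o jsonpath='{.spec.clusterIP}'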

04: Run sample pod with NFS disk mount

Next, we need to create a persistent volume and a persistent volume claim backed by the NFS service.
       
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # replace with the cluster IP of the NFS service from step 03
    server: <cluster IP of NFS service>
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # empty string disables dynamic provisioning so the claim binds to the static PV above
  resources:
    requests:
      storage: 100Mi
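
After applying this manifest (saved as, for example, nfs-pv-pvc.yaml), both objects should end up in the Bound state:

kubectl apply -f nfs-pv-pvc.yaml
# both should report STATUS Bound
kubectl get pv nfs
kubectl get pvc nfs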


Now here is a sample deployment whose pods mount the NFS-backed claim.
       
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-busybox
  template:
    metadata:
      labels:
        app: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index-`hostname`.html; hostname >> /mnt/index-`hostname`.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

I have scaled the deployment to simulate the shared disk across zones.
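
For example, scaling out and listing /mnt from one of the pods should show the index files written by every replica (the pod name below is a placeholder; use a name from kubectl get pods):

kubectl scale deployment nfs-busybox --replicas=4
kubectl get pods -l app=nfs-busybox -o wide
# each replica writes its own index-<hostname>.html into the shared mount
kubectl exec <nfs-busybox-pod-name> -- ls /mnt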

The /mnt directory is successfully mounted via the NFS service in every pod.