1.6.2 Using File Storage (based on CSI Driver)
Storage inside a container is ephemeral: it is destroyed when the container is deleted or terminated. A separate Persistent Volume is used to keep data from disappearing and to store data that must be preserved.
Persistent Volumes can be backed by the widely used Network File System (NFS) for file sharing. Because NFS supports concurrent writes, multiple Pods in Kubernetes can read and write the same volume at the same time. On OCI, the managed NFS offering is OCI File Storage Service (FSS). Now let's see how to use OCI File Storage as a Persistent Volume with the ReadWriteMany (RWX) access mode in OKE.
Create File Storage
Create a file system by referring to the related OCI documentation.
Log in to the OCI console.
Go to Storage > File Storage from the hamburger menu at the top left.
Select the target compartment.
Under File Systems, click Create File System.
In the basic settings screen, change only the fields below to the desired values, then create the file system.
- File System Information:
- Name
- Mount Target Information:
- New Mount Target Name
- Virtual Cloud Network
- Subnet
Check the creation result
Go to the details page of the mount target you created under File Storage > Mount Targets and check the following information:
- Mount Target OCID: …sc2mia
- IP Address: ex) 10.0.20.194
- Export Path: ex) /OKE-FFS-Storage
Security List Settings
When creating the file system, add rules for the File Storage service to the security list of the mount target's subnet: per the OCI File Storage documentation, this typically means stateful rules allowing TCP ports 111 and 2048–2050 and UDP ports 111 and 2048 between the Worker Node subnet and the mount target subnet.
Using Persistent Volume with File Storage Service
Create a Persistent Volume (PV)
- spec.csi.driver: use the new fss.csi.oraclecloud.com driver
- spec.csi.volumeHandle: set it in the format <FileSystemOCID>:<MountTargetIPAddress>:<ExportPath>, using the values checked earlier
- spec.accessModes: the FSS CSI driver supports RWX (ReadWriteMany), so set the ReadWriteMany access mode for this test
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oke-fss-pv
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fss.csi.oraclecloud.com
    volumeHandle: ocid1.filesystem.oc1.ap_seoul_1.aaaaaaaaaaabgsxcpfxhsllqojxwiotboawwg2dvnzrwqzlpnywtcllbmqwtcaaa:10.0.20.194:/OKE-FFS-Storage
```
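As a quick sanity check, the volumeHandle can be split back into its three components (file system OCID, mount target IP, export path) with plain shell parameter expansion. A minimal sketch, using the values from this example:

```shell
# Split an FSS CSI volumeHandle (<FileSystemOCID>:<MountTargetIP>:<ExportPath>)
# into its parts; the value below is the one used in the PV above.
handle="ocid1.filesystem.oc1.ap_seoul_1.aaaaaaaaaaabgsxcpfxhsllqojxwiotboawwg2dvnzrwqzlpnywtcllbmqwtcaaa:10.0.20.194:/OKE-FFS-Storage"
fs_ocid="${handle%%:*}"     # text before the first colon: the File System OCID
rest="${handle#*:}"         # drop the OCID and its colon
mt_ip="${rest%%:*}"         # text before the next colon: the mount target IP
export_path="${rest#*:}"    # remainder: the export path
echo "$fs_ocid"
echo "$mt_ip"
echo "$export_path"
```

If one of the three fields comes out empty or malformed, the PV will fail to mount, so checking the handle this way before applying the manifest can save a debugging round trip.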
Create a Persistent Volume Claim (PVC)
- storageClassName: set to "" (an empty string), since the PV is statically provisioned
- volumeName: specify the name of the PV created earlier
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oke-fss-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
  volumeName: oke-fss-pv
```
Deploying Pods Using the PVC
Register the created PVC as a volume and mount it in the Pod template.
Because the ReadWriteMany access mode is used, multiple replicas can be specified, unlike the earlier example that used a Block Volume as the PV.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-fss-pvc
  name: nginx-fss-pvc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-fss-pvc
  template:
    metadata:
      labels:
        app: nginx-fss-pvc
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: oke-fss-pvc
```
Examples of execution and results
Although the three Pods are placed on three different Worker Nodes, you can see that they all start normally.
```shell
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl apply -f oke-fss-pv.yaml
persistentvolume/oke-fss-pv created
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl apply -f oke-fss-pvc.yaml
persistentvolumeclaim/oke-fss-pvc created
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl get pv,pvc
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/oke-fss-pv   50Gi       RWX            Retain           Bound    default/oke-fss-pvc                           15m
NAME                                STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/oke-fss-pvc   Bound    oke-fss-pv   50Gi       RWX                           15m
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl apply -f nginx-deployment-fss-pvc.yaml
deployment.apps/nginx-fss-pvc created
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
nginx-fss-pvc-9fb98454f-kgmbd   1/1     Running   0          16m   10.244.0.131   10.0.10.124   <none>           <none>
nginx-fss-pvc-9fb98454f-qxwg7   1/1     Running   0          16m   10.244.1.3     10.0.10.186   <none>           <none>
nginx-fss-pvc-9fb98454f-tltbx   1/1     Running   0          16m   10.244.0.4     10.0.10.157   <none>           <none>
```
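To confirm the replicas really are spread across distinct Worker Nodes, you can count the unique values in the NODE column. A small sketch operating on the captured table above as sample data (in a live cluster you would pipe `kubectl get pod -o wide` instead):

```shell
# Count distinct worker nodes hosting the replicas, using the
# 'kubectl get pod -o wide' output captured above as sample data.
sample='NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE
nginx-fss-pvc-9fb98454f-kgmbd   1/1     Running   0          16m   10.244.0.131   10.0.10.124
nginx-fss-pvc-9fb98454f-qxwg7   1/1     Running   0          16m   10.244.1.3     10.0.10.186
nginx-fss-pvc-9fb98454f-tltbx   1/1     Running   0          16m   10.244.0.4     10.0.10.157'
# Skip the header line, take the 7th (NODE) column, count unique values.
node_count=$(echo "$sample" | tail -n +2 | awk '{print $7}' | sort -u | wc -l | tr -d ' ')
echo "$node_count"   # 3 distinct nodes
```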
File write test
- A file was written to the PV from the first Pod as shown below, yet the same contents can be read from every Pod.
```shell
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl exec -it nginx-fss-pvc-9fb98454f-kgmbd -- bash -c 'echo "Hello FSS from 10.0.10.124" >> /usr/share/nginx/html/hello_world.txt'
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl exec -it nginx-fss-pvc-9fb98454f-kgmbd -- cat /usr/share/nginx/html/hello_world.txt
Hello FSS from 10.0.10.124
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl exec -it nginx-fss-pvc-9fb98454f-qxwg7 -- cat /usr/share/nginx/html/hello_world.txt
Hello FSS from 10.0.10.124
oke_admin@cloudshell:file-storage (ap-seoul-1)$ kubectl exec -it nginx-fss-pvc-9fb98454f-tltbx -- cat /usr/share/nginx/html/hello_world.txt
Hello FSS from 10.0.10.124
```
This article was written in my personal time as an individual. It may contain errors, and the opinions expressed are my own.