
NFS shared storage for K8s

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Special note: for testing only; not recommended for production environments

1. Configure on the master node (node1)

1) Install NFS

# yum -y install nfs-utils

Key NFS files and tools include:

Main configuration file: /etc/exports

NFS file system maintenance command: /usr/bin/exportfs

Log files for shared resources: /var/lib/nfs/*tab

Client command to query shared resources: /usr/sbin/showmount

Port configuration: /etc/sysconfig/nfs

2) create a new shared directory

# mkdir -p /data/volunes

3) Set NFS permissions

# cat /etc/exports

Permission parameter description:

ro: read-only access

rw: read-write access

sync: data is written to the share synchronously, as requests arrive

async: NFS may respond to requests before the data has been written to disk

secure: NFS requests must come from a TCP/IP port below 1024

insecure: NFS requests may come from ports above 1024

wdelay: if multiple users are writing to the NFS directory, group the writes (default)

no_wdelay: if multiple users are writing to the NFS directory, write immediately; this setting is unnecessary when async is used

hide: do not share subdirectories of the NFS shared directory

no_hide: share subdirectories of the NFS directory

subtree_check: when sharing a subdirectory such as /usr/bin, force NFS to check the permissions of the parent directory (default)

no_subtree_check: do not check parent directory permissions

all_squash: map the UID and GID of shared files to the anonymous user; suitable for public directories

no_all_squash: preserve the UID and GID of shared files (default)

root_squash: map all requests from the root user to the anonymous user's permissions (default)

no_root_squash: the root user has full administrative access to the shared directory (not secure)

anonuid=xxx: specify the UID of the anonymous user in the NFS server's /etc/passwd file

anongid=xxx: specify the GID of the anonymous user in the NFS server's /etc/passwd file
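The contents of /etc/exports are not shown above; a minimal example entry, assuming /data/volunes is exported read-write to any client (the `*` client range and the exact option set are assumptions for illustration, not from the original):

```
/data/volunes *(rw,sync,no_root_squash)
```

Each line names one exported directory, followed by the allowed clients and, in parentheses, the permission parameters listed above.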

4) start the service

# systemctl enable nfs.service

# systemctl start nfs.service

# exportfs -arv    # apply the configuration without restarting the NFS service

2. Configure NFS on nodes node2 and node3

1) installation

# yum -y install nfs-utils

2) Mount on node3 and node2:

# mount -t nfs 172.160.45.160:/data/volunes/ /mnt

3. Check

1) View the registration status of the RPC service

# rpcinfo -p localhost

2) showmount test

# showmount -e 172.160.45.160

4. There are two ways for Kubernetes to use NFS shared storage:

# manually and statically create required PV and PVC

# create a corresponding PV dynamically by creating a PVC, without the need to manually create a PV

# static creation #

1) apply for PV volumes statically

A PersistentVolume (PV) is Kubernetes' abstraction of a storage resource. It mainly records key information such as storage capacity, access mode, storage type, and reclaim policy. The PV is the actual entry point through which Kubernetes connects to the storage backend.

# create the corresponding directories for the pv

# mkdir -p /data/volunes/v{1,2,3}
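The v{1,2,3} brace expansion above creates all three PV directories in one command; a quick sketch of the same idiom against a temporary path (the /tmp path is only for illustration):

```shell
# Brace expansion turns v{1,2,3} into v1 v2 v3 before mkdir runs,
# so a single command creates all three directories.
mkdir -p /tmp/nfs-demo/v{1,2,3}
ls /tmp/nfs-demo
```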

# configure exports

# cat /etc/exports

# effective immediately

# exportfs -arv

2) create a pv file

# vim nfs-pv1.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    pv: nfs-pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/volunes/v1
    server: 172.160.45.160

nfs-pv2.yaml is similar.
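For completeness, a plausible nfs-pv2.yaml, assuming it mirrors nfs-pv1 with only the name, label, and export path changed (this sketch is an assumption; the original does not show the file):

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
  labels:
    pv: nfs-pv2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/volunes/v2
    server: 172.160.45.160
```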

Configuration instructions:

① capacity specifies that the capacity of the PV is 1Gi.

② accessModes specifies ReadWriteMany as the access mode. The supported access modes are:

ReadWriteOnce (RWO): read-write, but mountable by a single node only

ReadOnlyMany (ROX): read-only, mountable by multiple nodes

ReadWriteMany (RWX): read-write, mountable by multiple nodes

③ persistentVolumeReclaimPolicy specifies Recycle as the reclaim policy of the PV. The supported policies are:

Retain: the data is kept and the PV is not reassigned to another PVC; an administrator must clean up the data manually

Recycle: the data in the PV is cleared and the PV resource is kept for use by other PVCs

Delete: the entire PV resource and its data are deleted

④ storageClassName specifies that the class of the PV is nfs. This is effectively a classification for the PV; a PVC can request a PV of a specific class.

⑤ nfs specifies the directory on the NFS server that the PV corresponds to.

# create the pv

# kubectl create -f nfs-pv1.yaml

# kubectl get pv

Status Available indicates that the PV is ready and can be claimed by a PVC

3) create a PVC

A PersistentVolumeClaim (PVC) is a claim on PV resources. After a PVC is bound to a concrete PV, a pod binds the PVC in order to use the PV's storage. The PVC is the abstract middle layer in Kubernetes' declarative model for binding storage resources: a pod cannot use a PV directly; it must go through a PVC, and the PVC must be bound to a PV before the pod can use it.

# vim nfs-pvc2.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv2

Execute the yaml file to create the pvc

# kubectl create -f nfs-pvc2.yaml

View the pv resources

Note: the pv now shows the PVC it is bound to

4) create a pod

[root@node1 yaml]# vim nfs-nginx.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-test
  labels:
    name: nginx-test
spec:
  replicas: 3
  selector:
    name: nginx-test
  template:
    metadata:
      labels:
        name: nginx-test
    spec:
      containers:
        - name: web01
          image: docker.io/nginx:1.14.2
          volumeMounts:
            - mountPath: "/usr/share/nginx/html/"
              name: nfs-pv1
            - mountPath: "/var/log/nginx/"
              name: nfs-pv2
          ports:
            - containerPort: 80
      volumes:
        - name: nfs-pv1
          persistentVolumeClaim:
            claimName: nfs-pvc1
        - name: nfs-pv2
          persistentVolumeClaim:
            claimName: nfs-pvc2

Remarks:

# nfs-pv1 stores the web page files; nfs-pv2 stores the log files

# execution file

# kubectl create -f nfs-nginx.yaml

# kubectl get pods -o wide

# create the nfs-nginx service file

# cat nfs-nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  labels:
    name: nginx-test
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      name: http
      nodePort: 30088
  selector:
    name: nginx-test

# execution file

# kubectl create -f nfs-nginx-svc.yaml

# kubectl get svc
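With the Service in place, the shared volume can be exercised end to end: write a page into the v1 export on the NFS server, then fetch it through the NodePort. A sketch (the node IP below reuses the NFS server address from the earlier steps and is an assumption for a single-node test):

```
# write a test page into the volume backing nfs-pv1
echo "hello nfs" > /data/volunes/v1/index.html

# fetch it through the NodePort service from any node
curl http://172.160.45.160:30088/
```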

5) verify that PV is available

Test page: the nginx test page is reachable both from the internal network and from the public network via the NodePort (screenshots omitted).

Remarks: NFS itself has no redundancy, so data is easily lost if the data disk is damaged. For production, distributed storage such as GlusterFS or CephFS is recommended.
