
Nacos cluster deployment (k8s environment)

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Deploy Nacos on Kubernetes through StatefulSets

For rapid deployment, please refer to the official website https://nacos.io/en-us/docs/use-nacos-with-kubernetes.html.

1 Rapid deployment

git clone https://github.com/nacos-group/nacos-k8s.git
cd nacos-k8s
chmod +x quick-startup.sh
./quick-startup.sh

1.2 Service testing

Service registration:
curl -X PUT 'http://cluster-ip:8848/nacos/v1/ns/instance?serviceName=nacos.naming.serviceName&ip=20.18.7.10&port=8080'

Service discovery:
curl -X GET 'http://cluster-ip:8848/nacos/v1/ns/instances?serviceName=nacos.naming.serviceName'

Publish configuration:
curl -X POST "http://cluster-ip:8848/nacos/v1/cs/configs?dataId=nacos.cfg.dataId&group=test&content=helloWorld"

Get configuration:
curl -X GET "http://cluster-ip:8848/nacos/v1/cs/configs?dataId=nacos.cfg.dataId&group=test"
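The four calls above differ only in the endpoint path and query string, so the URLs can be composed rather than retyped. The nacos_api_url helper below is a hypothetical convenience, not part of the nacos-k8s repository:

```shell
# Hypothetical helper: compose a Nacos v1 open-API URL from a host:port,
# an endpoint path, and key=value query parameters.
nacos_api_url() {
  local base="$1" path="$2"; shift 2
  local qs="" kv
  for kv in "$@"; do
    qs="${qs}&${kv}"
  done
  echo "http://${base}/nacos/v1/${path}?${qs#&}"
}

# Reproduces the service-registration URL used above.
nacos_api_url cluster-ip:8848 ns/instance \
  serviceName=nacos.naming.serviceName ip=20.18.7.10 port=8080
```

Use it inline, e.g. `curl -X PUT "$(nacos_api_url cluster-ip:8848 ns/instance serviceName=... ip=... port=...)"`.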

2. Deploy in NFS mode.

NFS is used to persist data: the MySQL database files and the Nacos data and logs.

Deploying this way requires modifying the official YAML files. The following are the steps and YAML files verified in actual testing.

2.1 deploy the NFS service environment

Choose an intranet machine (here 192.168.1.10) that can reach the k8s cluster, deploy the NFS service on it, and pick a suitable disk for the shared directories.

yum install -y nfs-utils rpcbind
mkdir -p /data/nfs
mkdir -p /data/mysql_master
mkdir -p /data/mysql_slave

vim /etc/exports
/data/nfs *(insecure,rw,async,no_root_squash)
/data/mysql_slave *(insecure,rw,async,no_root_squash)
/data/mysql_master *(insecure,rw,async,no_root_squash)

systemctl start rpcbind
systemctl start nfs
systemctl enable rpcbind
systemctl enable nfs-server
exportfs -a
showmount -e
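Before pointing the provisioner at the share, it is worth sanity-checking the exports file. The list_exports helper below is a hypothetical sketch that parses the one-entry-per-line format written above and prints each exported path with its client spec:

```shell
# Sketch: list each exported path and its client spec from an exports file
# (assumes the one-entry-per-line format written above).
list_exports() {
  awk 'NF >= 2 && $1 !~ /^#/ { print $1, $2 }' "$1"
}

# Exercise it against a copy of the exports written above.
cat > /tmp/exports.test <<'EOF'
/data/nfs *(insecure,rw,async,no_root_squash)
/data/mysql_slave *(insecure,rw,async,no_root_squash)
/data/mysql_master *(insecure,rw,async,no_root_squash)
EOF
list_exports /tmp/exports.test
```

On the server itself, `showmount -e` remains the authoritative check.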

2.2 Deploy NFS on k8s

cd nacos-k8s/deploy/nfs/
[root@localhost nfs]# ll
total 12
-rw-r--r--. 1 root root  153 Oct 15 08:05 class.yaml
-rw-r--r--. 1 root root  877 Oct 15 14:37 deployment.yaml
-rw-r--r--. 1 root root 1508 Oct 15 08:05 rbac.yaml

2.2.1 Create the RBAC resources. Use the default rbac.yaml without modification; it targets the default namespace. If you need to deploy to a specific namespace, change the namespace fields in the file.

kubectl create -f rbac.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

2.2.2 Create the ServiceAccount and deploy the NFS-Client Provisioner

kubectl create -f deployment.yaml   ## modify the NFS server IP and directory first

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.10
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.10
            path: /data/nfs

2.2.3 create NFS StorageClass

kubectl create -f class.yaml   ## no modification needed

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
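Before moving on to the databases, you can verify that dynamic provisioning actually works with a throwaway claim. The manifest below is a sketch (the name test-claim is arbitrary); the PVC should reach Bound status within a few seconds if the provisioner and StorageClass are healthy:

```yaml
# Hypothetical test claim against the managed-nfs-storage class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

Apply it with `kubectl create -f test-claim.yaml`, confirm with `kubectl get pvc test-claim`, then delete it.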

2.3 deploy the database

cd nacos-k8s/deploy/mysql/

2.3.1 Deploy the master database

kubectl create -f mysql-master-nfs.yaml   ## modify the NFS server IP and directory

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-master
  labels:
    name: mysql-master
spec:
  replicas: 1
  selector:
    name: mysql-master
  template:
    metadata:
      labels:
        name: mysql-master
    spec:
      containers:
        - name: master
          image: nacos/nacos-mysql-master:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-master-data
              mountPath: /var/lib/mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "root"
            - name: MYSQL_DATABASE
              value: "nacos_devtest"
            - name: MYSQL_USER
              value: "nacos"
            - name: MYSQL_PASSWORD
              value: "nacos"
            - name: MYSQL_REPLICATION_USER
              value: 'nacos_ru'
            - name: MYSQL_REPLICATION_PASSWORD
              value: 'nacos_ru'
      volumes:
        - name: mysql-master-data
          nfs:
            server: 192.168.1.10
            path: /data/mysql_master
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-master
  labels:
    name: mysql-master
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    name: mysql-master

2.3.2 Deploy the slave database

kubectl create -f mysql-slave-nfs.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-slave
  labels:
    name: mysql-slave
spec:
  replicas: 1
  selector:
    name: mysql-slave
  template:
    metadata:
      labels:
        name: mysql-slave
    spec:
      containers:
        - name: slave
          image: nacos/nacos-mysql-slave:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-slave-data
              mountPath: /var/lib/mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "root"
            - name: MYSQL_REPLICATION_USER
              value: 'nacos_ru'
            - name: MYSQL_REPLICATION_PASSWORD
              value: 'nacos_ru'
      volumes:
        - name: mysql-slave-data
          nfs:
            server: 192.168.1.10
            path: /data/mysql_slave
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-slave
  labels:
    name: mysql-slave
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    name: mysql-slave

2.4 Deploy nacos

cd nacos-k8s/deploy/nacos/

kubectl create -f nacos-pvc-nfs.yaml   ## this file needs substantial modification: add the volume mounts on top of the quickstart version and strip unrelated content, as shown below

Note the NACOS_SERVERS entry: when the StatefulSet is created, peer domain names such as nacos-0.nacos-headless.default.svc.cluster.local:8848 are generated automatically. My k8s cluster's default domain is set to cluster.test, so the entries in the file are written as nacos-0.nacos-headless.default.svc.cluster.test:8848 and so on.
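The pattern for these peer addresses is fixed: <pod>-<ordinal>.<headless-svc>.<namespace>.svc.<cluster-domain>:<port>. So the NACOS_SERVERS value can be generated rather than typed by hand; the nacos_servers helper below is a hypothetical sketch of that:

```shell
# Hypothetical helper: build the NACOS_SERVERS value for a StatefulSet.
nacos_servers() {
  local name="$1" svc="$2" ns="$3" domain="$4" replicas="$5" port="$6"
  local out="" i
  for i in $(seq 0 $((replicas - 1))); do
    out="${out} ${name}-${i}.${svc}.${ns}.svc.${domain}:${port}"
  done
  echo "${out# }"   # trim the leading space
}

nacos_servers nacos nacos-headless default cluster.test 3 8848
```

Paste the output into the NACOS_SERVERS env value. You can confirm your cluster domain by checking the search line of /etc/resolv.conf inside any pod.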

---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
spec:
  ports:
    - port: 8848
      name: server
      targetPort: 8848
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  mysql.master.db.name: "nacos_devtest"
  mysql.master.port: "3306"
  mysql.slave.port: "3306"
  mysql.master.user: "nacos"
  mysql.master.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  serviceName: nacos-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos-headless
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: k8snacos
          imagePullPolicy: Always
          image: nacos/nacos-server:latest
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_MASTER_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.db.name
            - name: MYSQL_MASTER_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.port
            - name: MYSQL_SLAVE_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.slave.port
            - name: MYSQL_MASTER_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.user
            - name: MYSQL_MASTER_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.password
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: NACOS_SERVERS
              value: "nacos-0.nacos-headless.default.svc.cluster.test:8848 nacos-1.nacos-headless.default.svc.cluster.test:8848 nacos-2.nacos-headless.default.svc.cluster.test:8848"
          volumeMounts:
            - name: datadir
              mountPath: /home/nacos/data
            - name: logdir
              mountPath: /home/nacos/logs
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: logdir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 5Gi
  selector:
    matchLabels:
      app: nacos

You can also use the original upstream file:

---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 8848
      name: server
      targetPort: 8848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  mysql.master.db.name: "nacos_devtest"
  mysql.master.port: "3306"
  mysql.slave.port: "3306"
  mysql.master.user: "nacos"
  mysql.master.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  serviceName: nacos-headless
  replicas: 2
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:latest
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/home/nacos/plugins/peer-finder"
              name: plugindir
      containers:
        - name: nacos
          imagePullPolicy: Always
          image: nacos/nacos-server:latest
          resources:
            requests:
              memory: "2Gi"
              cpu: "1000m"
          ports:
            - containerPort: 8848
              name: client-port
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_MASTER_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.db.name
            - name: MYSQL_MASTER_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.port
            - name: MYSQL_SLAVE_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.slave.port
            - name: MYSQL_MASTER_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.user
            - name: MYSQL_MASTER_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.password
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
          readinessProbe:
            httpGet:
              port: client-port
              path: /nacos/v1/console/health/readiness
            initialDelaySeconds: 60
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              port: client-port
              path: /nacos/v1/console/health/liveness
            initialDelaySeconds: 60
            timeoutSeconds: 3
          volumeMounts:
            - name: plugindir
              mountPath: /home/nacos/plugins/peer-finder
            - name: datadir
              mountPath: /home/nacos/data
            - name: logdir
              mountPath: /home/nacos/logs
  volumeClaimTemplates:
    - metadata:
        name: plugindir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: logdir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 5Gi
  selector:
    matchLabels:
      app: nacos

View the result

[root@localhost nacos]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
mysql-master-hnnzq                        1/1     Running   0          43h
mysql-slave-jjq98                         1/1     Running   0          43h
nacos-0                                   1/1     Running   0          41h
nacos-1                                   1/1     Running   0          41h
nacos-2                                   1/1     Running   0          41h
nfs-client-provisioner-57c8c85896-cpxtx   1/1     Running   0          45h

[root@localhost nacos]# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
kubernetes       ClusterIP   172.21.0.1      <none>        443/TCP     9d
mysql-master     ClusterIP   172.21.12.11    <none>        3306/TCP    43h
mysql-slave      ClusterIP   172.21.1.9      <none>        3306/TCP    43h
nacos-headless   ClusterIP   172.21.11.220   <none>        8848/TCP    41h
nginx-svc        ClusterIP   172.21.1.104    <none>        10080/TCP   8d

[root@localhost nacos]# kubectl get storageclass
NAME                       PROVISIONER      AGE
alicloud-disk-available    alicloud/disk    9d
alicloud-disk-efficiency   alicloud/disk    9d
alicloud-disk-essd         alicloud/disk    9d
alicloud-disk-ssd          alicloud/disk    9d
managed-nfs-storage        fuseim.pri/ifs   45h

[root@localhost nacos]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS          REASON   AGE
persistentvolume/pvc-c920f9cf-f56f-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/datadir-nacos-0     managed-nfs-storage            43h
persistentvolume/pvc-c921977d-f56f-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/logdir-nacos-0      managed-nfs-storage            43h
persistentvolume/pvc-c922401f-f56f-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/plugindir-nacos-0   managed-nfs-storage            43h
persistentvolume/pvc-db3ccda6-f56f-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/datadir-nacos-1     managed-nfs-storage            43h
persistentvolume/pvc-db3dc25a-f56f-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/logdir-nacos-1      managed-nfs-storage            43h
persistentvolume/pvc-db3eb86c-f56f-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/plugindir-nacos-1   managed-nfs-storage            43h
persistentvolume/pvc-fa47ae6e-f57a-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/logdir-nacos-2     managed-nfs-storage            41h
persistentvolume/pvc-fa489723-f57a-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/plugindir-nacos-2   managed-nfs-storage            41h
persistentvolume/pvc-fa494137-f57a-11e9-90dc-da6119823c38   5Gi        RWX            Delete           Bound    default/datadir-nacos-2     managed-nfs-storage            41h

NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/datadir-nacos-0     Bound    pvc-c920f9cf-f56f-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   43h
persistentvolumeclaim/datadir-nacos-1     Bound    pvc-db3ccda6-f56f-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   43h
persistentvolumeclaim/datadir-nacos-2     Bound    pvc-fa494137-f57a-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   41h
persistentvolumeclaim/logdir-nacos-0      Bound    pvc-c921977d-f56f-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   43h
persistentvolumeclaim/logdir-nacos-1      Bound    pvc-db3dc25a-f56f-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   43h
persistentvolumeclaim/logdir-nacos-2      Bound    pvc-fa47ae6e-f57a-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   41h
persistentvolumeclaim/plugindir-nacos-0   Bound    pvc-c922401f-f56f-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   43h
persistentvolumeclaim/plugindir-nacos-1   Bound    pvc-db3eb86c-f56f-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   43h
persistentvolumeclaim/plugindir-nacos-2   Bound    pvc-fa489723-f57a-11e9-90dc-da6119823c38   5Gi        RWX            managed-nfs-storage   41h
[root@localhost nacos]#
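Eyeballing long listings like the one above is error-prone. As a small sketch (the not_ready helper is hypothetical), you can count pods that are not yet 1/1 Running; here it is fed a saved sample of the output, but on a live cluster you would pipe `kubectl get pod` into a file first:

```shell
# Sketch: count pods that are not 1/1 Running in `kubectl get pod` output.
not_ready() {
  awk 'NR > 1 && ($2 != "1/1" || $3 != "Running")' "$1" | wc -l
}

# Sample taken from the listing above; on a live cluster:
#   kubectl get pod > /tmp/pods.txt
cat > /tmp/pods.txt <<'EOF'
NAME READY STATUS RESTARTS AGE
mysql-master-hnnzq 1/1 Running 0 43h
mysql-slave-jjq98 1/1 Running 0 43h
nacos-0 1/1 Running 0 41h
nacos-1 1/1 Running 0 41h
nacos-2 1/1 Running 0 41h
nfs-client-provisioner-57c8c85896-cpxtx 1/1 Running 0 45h
EOF
not_ready /tmp/pods.txt
```

A result of 0 means every pod is up.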

Finally, expose port 8848 through an Ingress to access the Nacos console from outside the cluster.
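As a sketch of that last step (the host nacos.example.com is a placeholder and the networking.k8s.io/v1 Ingress API is an assumption; adjust for your ingress controller and cluster version), such an Ingress might look like:

```yaml
# Hypothetical Ingress exposing the Nacos console; host and paths are
# placeholders for your environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nacos-ingress
spec:
  rules:
    - host: nacos.example.com
      http:
        paths:
          - path: /nacos
            pathType: Prefix
            backend:
              service:
                name: nacos-headless
                port:
                  number: 8848
```

With DNS pointing the host at your ingress controller, the console is reachable at http://nacos.example.com/nacos.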
