Pitfalls encountered during cluster deployment:
1 node(s) had taints that the pod didn't tolerate.
A node in the cluster was tainted. Checking the node information showed that one node was not up, so only 2 pods started.
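To confirm which node is the problem, a minimal inspection sketch (the node name node2 is hypothetical):

kubectl get nodes                              # find the node that is NotReady
kubectl describe node node2 | grep -i taint    # list the taints on the suspect node
# If appropriate, remove the taint (the key and effect here are examples):
kubectl taint nodes node2 node.kubernetes.io/unreachable:NoSchedule-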
2 Pitfalls with PV and PVC
Three zookeeper instances need to be created from a single zk.yaml. Because of how a zookeeper ensemble works, the three instances need three different data directories (and likewise for log directories, if you add them) before the cluster becomes healthy.
So when creating PVs, you need to create three different PVs.
In zk.yaml, the PVCs need to bind to three different backend PVs. If you create a PVC the traditional way and reference it by name in zk.yaml, you can only specify one name, so it is impossible to bind to three different PVs. That is why you need volumeClaimTemplates (a PVC template). Note that accessModes in the template must be ReadWriteOnce; with ReadWriteMany the binding fails (the PVs created below use ReadWriteOnce, and access modes must be compatible for a PVC to bind).
volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
If you are using dynamic PVC provisioning, you still need the PVC template.
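With dynamic provisioning the template looks the same, except that it references a StorageClass instead of pre-created PVs; a minimal sketch, assuming a StorageClass named nfs-client exists in the cluster (the name is hypothetical):

volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    storageClassName: "nfs-client"  # hypothetical StorageClass; its provisioner creates PVs on demand
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi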
3 Exposing zookeeper for external access
Because zookeeper is deployed as a StatefulSet, it uses a headless Service. So how is it exposed externally? The approach is to add a separate Service dedicated to external access, as sketched below.
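This is essentially the zk-cs NodePort Service that appears in the full zk.yaml later (in this sketch the nodePort is left for Kubernetes to auto-assign):

apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  type: NodePort          # reachable from outside the cluster at nodeIP:nodePort
  ports:
  - port: 2181
    targetPort: 2181
    name: client
  selector:
    app: zk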
Deployment steps
1 Install NFS on 192.168.0.11, configure it, and enable it at boot
2 Install the NFS client on each node; a minimal sketch of steps 1 and 2 follows.
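A sketch of steps 1 and 2, assuming CentOS/RHEL hosts (the package names and export options are assumptions; the export path matches the PV paths used in step 3):

# on 192.168.0.11 (the NFS server)
yum install -y nfs-utils rpcbind
mkdir -p /data/zk/data1 /data/zk/data2 /data/zk/data3
echo "/data/zk *(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable --now rpcbind nfs-server

# on every node (the NFS client)
yum install -y nfs-utils
showmount -e 192.168.0.11    # verify the exports are visible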
3 Create the PVs used by zk on the master node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk-data1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.0.11
    path: /data/zk/data1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk-data2
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.0.11
    path: /data/zk/data2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk-data3
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.0.11
    path: /data/zk/data3
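Apply the file and confirm the three PVs show up as Available (the file name zk-pv.yaml is an assumption):

kubectl apply -f zk-pv.yaml
kubectl get pv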
4 On the master node, create the zookeeper yaml file that creates the pods
Refer to the official document: https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/
The image address in the official document is hosted overseas. I found a usable address online that mirrors the official image.
[root@localhost zk]# cat zk.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  type: NodePort
  ports:
  - port: 2181
    targetPort: 2181
    name: client
    nodePort: 2181  # outside the default 30000-32767 NodePort range; the apiserver's --service-node-port-range must allow it
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zok
spec:
  serviceName: zk-hs
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
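Apply the manifest and watch the StatefulSet bring the pods up in order (zok-0, zok-1, zok-2); each pod's PVC should bind to one of the PVs created in step 3:

kubectl apply -f zk.yaml
kubectl get pods -w -l app=zk
kubectl get pvc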
5 View cluster status
for i in 0 1 2; do kubectl exec zok-$i -- zkServer.sh status; done
6 Access the zookeeper cluster externally (from a local computer)
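Since the node addresses depend on your environment, only a sketch: from the local machine, point a ZooKeeper client (or nc) at any node's IP and the service's node port (2181 in the yaml above); 192.168.0.12 is a hypothetical node IP:

echo stat | nc 192.168.0.12 2181       # four-letter-word check: prints server stats if reachable
zkCli.sh -server 192.168.0.12:2181     # or connect with a local ZooKeeper CLI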