2025-03-26 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
1.4) zookeeper deployment
1.4.1) zookeeper deployment
kubectl apply -f zookeeper.yaml
1.4.2) zookeeper deployment process check
[root@node7 ~]# kubectl describe pods zookeeper-
zookeeper-0  zookeeper-1  zookeeper-2
kubectl get pods -w -l app=zookeeper
1.5) zookeeper cluster verification
1.5.1) check the Pods distribution in zookeeper StatefulSet
[root@node7 ~]# kubectl get pods -o wide
1.5.2) check the Pods hostname in zookeeper StatefulSet
for i in 0 1 2; do kubectl exec zookeeper-$i -- hostname; done
1.5.3) check the myid identity in zookeeper StatefulSet
for i in 0 1 2; do echo "myid zookeeper-$i"; kubectl exec zookeeper-$i -- cat /app/zookeeper/data/myid; done
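Each ZooKeeper server needs a unique myid, and in a StatefulSet the stable ordinal in the pod name is the natural source for it. A minimal local sketch of extracting that ordinal (`ordinal_of` is an illustrative helper, not part of the deployment files; whether myid equals the ordinal or ordinal + 1 depends on the image's init script):

```shell
# Extract the StatefulSet ordinal from a pod name like "zookeeper-2".
ordinal_of() {
  echo "${1##*-}"   # strip everything up to the last "-"
}

for pod in zookeeper-0 zookeeper-1 zookeeper-2; do
  echo "$pod -> ordinal $(ordinal_of "$pod")"
done
```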
1.5.4) check FQDN (fully qualified domain name) in zookeeper StatefulSet
for i in 0 1 2; do kubectl exec zookeeper-$i -- hostname -f; done
1.5.5) check dns resolution in zookeeper StatefulSet pods
for i in 0 1 2; do kubectl exec -ti busybox -- nslookup zookeeper-$i.zookeeper-headless.default.svc.cluster.local; done
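The names resolved above follow the stable pattern Kubernetes gives StatefulSet pods behind a headless Service: `<pod>.<service>.<namespace>.svc.cluster.local`. A small local sketch that builds these names (`fqdn` is an illustrative helper, not a real command):

```shell
# Build the DNS name a StatefulSet pod gets behind its headless Service.
fqdn() {
  pod="$1"; svc="$2"; ns="${3:-default}"
  echo "${pod}.${svc}.${ns}.svc.cluster.local"
}

for i in 0 1 2; do
  fqdn "zookeeper-$i" zookeeper-headless
done
```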
1.5.6) check that the zoo.cfg configuration file in the zookeeper StatefulSet is standardized
1.5.7) check zookeeper StatefulSet cluster status
for i in 0 1 2; do kubectl exec zookeeper-$i -- /app/zookeeper/bin/zkServer.sh status | grep Mode; done
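A healthy three-node ensemble should report exactly one leader and two followers. A local sketch of validating the collected Mode lines (`check_modes` and the sample output are illustrative, not from the original files):

```shell
# A healthy 3-node ensemble reports exactly 1 leader and 2 followers.
check_modes() {
  leaders=$(printf '%s\n' "$1" | grep -c 'Mode: leader')
  followers=$(printf '%s\n' "$1" | grep -c 'Mode: follower')
  if [ "$leaders" -eq 1 ] && [ "$followers" -eq 2 ]; then
    echo healthy
  else
    echo unhealthy
  fi
}

check_modes 'Mode: follower
Mode: leader
Mode: follower'   # prints "healthy"
```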
1.5.8) check the zookeeper StatefulSet with the four-letter commands
1.6) zookeeper cluster expansion
II. Kafka cluster deployment
2.1) list of kafka files
2.2) detailed explanation of kafka file list
2.2.1) oracle jdk software package
jdk-8u151-linux-x64.tar.gz
The image is based on centos6.6; the package is deployed under the /app directory, and the environment variables are configured in the Dockerfile.
2.2.2) kafka software package
kafka_2.12-2.2.0.tar.gz
The image is based on centos6.6; the package is deployed under the /app directory.
2.2.3) kafka Dockerfile
# Base image and author information
FROM centos:6.6
# MAINTAINER docker_user (renzhiyuan@docker.com)
# Standardized kafka and jdk versions
ENV JAVA_VERSION="1.8.0_151"
ENV KAFKA_VERSION="2.2.0"
ENV KAFKA_JDK_HOME=/app
ENV JAVA_HOME=/app/jdk1.8.0_151
ENV KAFKA_HOME=/app/kafka
ENV LANG=en_US.utf8
# Base package installation
# RUN yum makecache
RUN yum install lsof yum-utils lrzsz net-tools nc -y &> /dev/null
# Create the installation directory
RUN mkdir $KAFKA_JDK_HOME
# Permissions and variables
RUN chown -R root.root $KAFKA_JDK_HOME && chmod -R 755 $KAFKA_JDK_HOME
# Install and configure JDK
ADD jdk-8u151-linux-x64.tar.gz /app
RUN echo "export JAVA_HOME=/app/jdk1.8.0_151" >> /etc/profile
RUN echo "export PATH=\$JAVA_HOME/bin:\$PATH" >> /etc/profile
RUN echo "export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar" >> /etc/profile
# Install Kafka and create the symlink
ADD kafka_2.12-2.2.0.tar.gz /app
RUN ln -s /app/kafka_2.12-2.2.0 /app/kafka
# Configuration files, log rotation and JVM settings are standardized separately in kafkaGenConfig.sh
# Open ports
EXPOSE 9092 9999
2.2.4) kafka.yaml
# Headless Service for communication between Kafka brokers
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
  labels:
    app: kafka
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: kafka
    port: 9092
    targetPort: kafka
  selector:
    app: kafka
---
# Service for external access to kafka
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - name: kafka
    port: 9092
    targetPort: 9092
    nodePort: 32192
    protocol: TCP
  selector:
    app: kafka
---
# PodDisruptionBudget to keep a minimum number of pods running
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
# StatefulSet
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kafka
        imagePullPolicy: Always
        image: 192.168.8.183/library/kafka-zyxf:2.2.0
        resources:
          requests:
            memory: "500Mi"
            cpu: "256m"
        ports:
        - containerPort: 9092
          name: kafka
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx256M -Xms256M"
        command:
        - sh
        - -c
        - "/app/kafka/bin/kafka-server-start.sh /app/kafka/config/server.properties \
          --override broker.id=${HOSTNAME##*-} \
          --override zookeeper.connect=zookeeper:2181 \
          --override listeners=PLAINTEXT://:9092 \
          --override advertised.listeners=PLAINTEXT://:9092 \
          --override broker.id.generation.enable=false \
          --override auto.create.topics.enable=false \
          --override min.insync.replicas=2 \
          --override log.dirs=/app/kafka/kafka-logs \
          --override offsets.retention.minutes=10080 \
          --override default.replication.factor=3 \
          --override queued.max.requests=2000 \
          --override num.network.threads=8 \
          --override num.io.threads=16 \
          --override socket.send.buffer.bytes=1048576 \
          --override socket.receive.buffer.bytes=1048576 \
          --override num.replica.fetchers=4 \
          --override replica.fetch.max.bytes=5242880 \
          --override replica.socket.receive.buffer.bytes=1048576"
        volumeMounts:
        - name: datadir
          mountPath: /app/kafka/kafka-logs
      volumes:
      - name: datadir
        hostPath:
          path: /kafka
          type: DirectoryOrCreate
#         emptyDir: {}
#        volumeMounts:
#        - name: data
#          mountPath: /renzhiyuan/kafka
#  volumeClaimTemplates:
#  - metadata:
#      name: data
#    spec:
#      accessModes: ["ReadWriteOnce"]
#      storageClassName: local-storage
#      resources:
#        requests:
#          storage: 3Gi
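The start command sets broker.id from the pod hostname with the shell expansion ${HOSTNAME##*-}; because StatefulSet pod names carry a stable ordinal, every broker gets a unique, stable id. A minimal local sketch of that expansion (the pod names are examples):

```shell
# ${HOSTNAME##*-} strips the longest prefix ending in "-",
# leaving the StatefulSet ordinal, which becomes broker.id.
for HOSTNAME in kafka-0 kafka-1 kafka-2; do
  echo "broker.id=${HOSTNAME##*-}"
done
# prints broker.id=0, broker.id=1, broker.id=2
```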
2.3) kafka image generation and upload
2.3.1) kafka image packaging
docker build -t kafka:2.2.0 -f kafka.Dockerfile .
docker tag kafka:2.2.0 192.168.8.183/library/kafka-zyxf:2.2.0
2.3.2) upload kafka image to harbor repository
docker login 192.168.8.183 -u admin -p renzhiyuan
docker push 192.168.8.183/library/kafka-zyxf:2.2.0
2.4) kafka deployment
2.4.1) kafka deployment
kubectl apply -f kafka.yaml
2.4.2) kafka deployment process check
[root@node7 ~]# kubectl describe pods kafka-
kafka-0  kafka-1  kafka-2
[root@node7 ~]#
kubectl get pods -w -l app=kafka
2.5) kafka cluster verification
2.5.1) check the Pods distribution in kafka StatefulSet
2.5.2) check the Pods hostname in kafka StatefulSet
for i in 0 1 2; do kubectl exec kafka-$i -- hostname; done
2.5.3) check FQDN (fully qualified domain name) in kafka StatefulSet
for i in 0 1 2; do kubectl exec kafka-$i -- hostname -f; done
2.5.4) check dns resolution in kafka StatefulSet pods
for i in 0 1 2; do kubectl exec -ti busybox -- nslookup kafka-$i.kafka-headless.default.svc.cluster.local; done
2.5.5) check the standardization of kafka StatefulSet server.properties configuration files
for i in 0 1 2; do echo kafka-$i; kubectl exec kafka-$i -- cat /app/kafka/logs/server.log | grep "auto.create.topics.enable = false"; done
2.5.6) kafka StatefulSet cluster verification
Create a topic:
kubectl exec kafka-1 -- /app/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 3 --partitions 6 --topic renzhiyuan
Check the topic information:
kubectl exec kafka-1 -- /app/kafka/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic renzhiyuan
Produce messages:
/app/kafka/bin/kafka-console-producer.sh --broker-list kafka:9092 --topic renzhiyuan
Consume messages:
/app/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --from-beginning --topic renzhiyuan
2.6) expansion of kafka cluster
2.6.1) expand kafka to 6 instances
kubectl scale --replicas=6 statefulset/kafka
statefulset.apps/kafka scaled
[root@node7 ~] #
2.6.2) check the kafka expansion process
[root@node7 ~]# kubectl describe pods kafka-
kafka-0  kafka-2  kafka-4
kafka-1  kafka-3  kafka-5
[root@node7 ~]#
kubectl get pods -w -l app=kafka
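Because StatefulSet scaling appends new ordinals, going from 3 to 6 replicas adds kafka-3 through kafka-5 while kafka-0 to kafka-2 keep their identities, so the existing broker.ids stay unique. A local sketch of the expected pod set (`expected_pods` is an illustrative helper):

```shell
# StatefulSet scaling appends ordinals: replicas=6 yields kafka-0..kafka-5.
expected_pods() {
  i=0
  while [ "$i" -lt "$1" ]; do
    echo "kafka-$i"
    i=$((i + 1))
  done
}

expected_pods 6
```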
Create a topic:
kubectl exec kafka-3 -- /app/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 3 --partitions 6 --topic renzhiyuan2
Check the topic information:
kubectl exec kafka-3 -- /app/kafka/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic renzhiyuan2
III. Kafka manager management deployment
3.1) list of kafka manager files
3.2) detailed explanation of kafka manager file list
3.3) kafka manager image generation and upload
3.3.1) kafka manager image packaging
docker build -t kafka-manager:1.3.3.18 -f manager.Dockerfile .
docker tag kafka-manager:1.3.3.18 192.168.8.183/library/kafka-manager-zyxf:1.3.3.18
3.3.2) upload kafka manager image to harbor repository
docker login 192.168.8.183 -u admin -p renzhiyuan
docker push 192.168.8.183/library/kafka-manager-zyxf:1.3.3.18
3.4) kafka manager deployment
[root@node7 ~]# kubectl apply -f manager.yaml
3.5) kafka manager verification
http://192.168.8.181:32009/
3.5.1) pre-expansion verification
3.5.2) verification after expansion