2025-03-30 Update From: SLTechnology News&Howtos
I. Environmental description
1. Server information
172.21.184.43  kafka, zk
172.21.184.44  kafka, zk
172.21.184.45  kafka, zk
172.21.244.7   ansible
2. Software version information
System: CentOS Linux release 7.5.1804 (Core)
Kafka: kafka_2.11-2.2.0
ZooKeeper: 3.4.8
Ansible: 2.7.10
II. Configuration preparation
1. Write the playbook-related configuration files. First, run tree to view the whole directory structure:
.
├── kafka
│   ├── group_vars
│   │   └── kafka
│   ├── hosts
│   ├── kafkainstall.yml
│   └── templates
│       ├── server.properties-1.j2
│       ├── server.properties-2.j2
│       ├── server.properties-3.j2
│       └── server.properties.j2
└── zookeeper
    ├── group_vars
    │   └── zook
    ├── hosts
    ├── templates
    │   └── zoo.cfg.j2
    └── zooKeeperinstall.yml
2. Create the relevant directories
mkdir -p /chj/ansibleplaybook/kafka/group_vars
mkdir -p /chj/ansibleplaybook/kafka/templates
mkdir -p /chj/ansibleplaybook/zookeeper/group_vars
mkdir -p /chj/ansibleplaybook/zookeeper/templates
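As a quick sanity check, the same layout can be built under a throwaway prefix and counted; this sketch uses mktemp instead of /chj so it is safe to run anywhere:

```shell
# Sketch: rebuild the playbook directory layout under a temporary prefix
# (mktemp -d) rather than /chj, then count the directories created.
base=$(mktemp -d)
mkdir -p "$base"/{kafka,zookeeper}/{group_vars,templates}
# base + kafka + zookeeper + 4 subdirectories = 7 directories in total
count=$(find "$base" -type d | wc -l | tr -d ' ')
echo "$count"
rm -rf "$base"
```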
3. Write the configuration files for deploying zookeeper
A. The group_vars file of zookeeper
vim /chj/ansibleplaybook/zookeeper/group_vars/zook
---
zk01server: 172.21.184.43
zk02server: 172.21.184.44
zk03server: 172.21.184.45
zookeeper_group: work
zookeeper_user: work
zookeeper_dir: /chj/data/zookeeper
zookeeper_appdir: /chj/app/zookeeper
zk01myid: 43
zk02myid: 44
zk03myid: 45
B. The templates file of zookeeper
vim /chj/ansibleplaybook/zookeeper/templates/zoo.cfg.j2
tickTime=2000
initLimit=500
syncLimit=20
dataDir={{ zookeeper_dir }}
dataLogDir=/chj/data/log/zookeeper/
clientPort=10311
maxClientCnxns=1000000
server.{{ zk01myid }}={{ zk01server }}:10301:10331
server.{{ zk02myid }}={{ zk02server }}:10302:10332
server.{{ zk03myid }}={{ zk03server }}:10303:10333
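To see what the server.N lines render to, here is a sketch that substitutes the group_vars values by hand (the myid doubles as the last octet of the server IP; values are hardcoded for illustration only):

```shell
# Sketch only: reproduce what zoo.cfg.j2 renders for the server.N lines,
# with the group_vars values hardcoded for illustration.
render_server_line() {
  # $1 = ordinal (1..3, selects the peer/election ports), $2 = myid / last octet
  printf 'server.%s=172.21.184.%s:1030%s:1033%s\n' "$2" "$2" "$1" "$1"
}
line1=$(render_server_line 1 43)
render_server_line 1 43
render_server_line 2 44
render_server_line 3 45
```

Each line names one quorum member: the first port is the follower-to-leader connection port, the second the leader-election port.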
C. The hosts file of zookeeper
vim /chj/ansibleplaybook/zookeeper/hosts
[zook]
172.21.184.43
172.21.184.44
172.21.184.45
D. The yml file for installing zookeeper
vim /chj/ansibleplaybook/zookeeper/zooKeeperinstall.yml
---
- hosts: "zook"
  gather_facts: no
  tasks:
    - name: Create zookeeper group
      group:
        name: '{{ zookeeper_group }}'
        state: present
      tags:
        - zookeeper_user
    - name: Create zookeeper user
      user:
        name: '{{ zookeeper_user }}'
        group: '{{ zookeeper_group }}'
        state: present
        createhome: no
      tags:
        - zookeeper_group
    - name: Check whether zookeeper is already installed
      stat:
        path: /chj/app/zookeeper
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Check for the existence of a java environment
      shell: |
        if [ ! -f "/usr/local/jdk/bin/java" ]; then
          echo "create directory"
          curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz
          tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/
          mv /usr/local/jdk1.8.0_121 /usr/local/jdk
          ln -s /usr/local/jdk/bin/java /sbin/java
        else
          echo "directory already exists"
        fi
    - name: Download and decompress chj_zookeeper
      unarchive:
        src: http://ops.chehejia.com:9090/pkg/zookeeper.tar.gz
        dest: /chj/app/
        copy: no
      when: node_files.stat.exists == False
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create zookeeper data directory and log directory
      shell: |
        if [ ! -d "/chj/data/zookeeper" ] && [ ! -d "/chj/data/log/zookeeper" ]; then
          echo "create directory"
          mkdir -p /chj/data/{zookeeper,log/zookeeper}
        else
          echo "directory already exists"
        fi
    - name: Modify directory permissions
      shell: chown work:work -R /chj/{data,app}
      when: node_files.stat.exists == False
    - name: Configure zk myid
      shell: "hostname -I | cut -d '.' -f 4 | awk '{print $1}' > /chj/data/zookeeper/myid"
    - name: Config zookeeper service
      template:
        src: zoo.cfg.j2
        dest: /chj/app/zookeeper/conf/zoo.cfg
        mode: 0755
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart ZooKeeper service
      shell: sudo su - work -c "/chj/app/zookeeper/console start"
    - name: Status ZooKeeper service
      shell: "sudo su - work -c '/chj/app/zookeeper/console status'"
      register: zookeeper_status_result
      ignore_errors: True
    - debug:
        msg: "{{ zookeeper_status_result }}"
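The "Configure zk myid" task derives each node's myid from the last octet of its own IP, which is why the group_vars set zk01myid to 43, and so on. The pipeline can be exercised standalone (the IP is hardcoded here instead of coming from hostname -I):

```shell
# Sketch of the myid derivation used in the playbook; in the real task the
# address comes from `hostname -I`, hardcoded here for illustration.
ip="172.21.184.43"
myid=$(echo "$ip" | cut -d '.' -f 4 | awk '{print $1}')
echo "$myid"
```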
4. Write the configuration files for deploying kafka
A. The group_vars file of kafka
vim /chj/ansibleplaybook/kafka/group_vars/kafka
---
kafka01: 172.21.184.43
kafka02: 172.21.184.44
kafka03: 172.21.184.45
kafka_group: work
kafka_user: work
log_dir: /chj/data/kafka
brokerid1: 1
brokerid2: 2
brokerid3: 3
zk_addr: 172.21.184.43:10311,172.21.184.44:10311,172.21.184.45:10311/kafka
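Note that zk_addr ends in /kafka: that trailing path is a ZooKeeper chroot, which keeps all of Kafka's znodes under one subtree instead of the root. A small sketch of how the connect string splits into host list and chroot (the value is copied from the group_vars above):

```shell
# Sketch: split a Kafka zookeeper.connect string into its host list and
# chroot suffix, using plain shell parameter expansion.
zk_addr="172.21.184.43:10311,172.21.184.44:10311,172.21.184.45:10311/kafka"
chroot="/${zk_addr##*/}"   # everything after the last '/'
hosts="${zk_addr%/*}"      # everything before the last '/'
echo "$chroot"
echo "$hosts"
```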
B. The templates file of kafka
vim /chj/ansibleplaybook/kafka/templates/server.properties-1.j2
broker.id={{ brokerid1 }}   # server.properties-2.j2 and server.properties-3.j2 use brokerid2 and brokerid3, respectively
auto.create.topics.enable=false
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=snappy
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
fetch.message.max.bytes=10485760
fetch.purgatory.purge.interval.requests=10000
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
host.name={{ kafka01 }}
listeners=PLAINTEXT://{{ kafka01 }}:9092   # server.properties-2.j2 and server.properties-3.j2 use kafka02 and kafka03, respectively
log.cleanup.interval.mins=1200
log.dirs={{ log_dir }}
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=10000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=1
offsets.topic.segment.bytes=104857600
port=9092
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=10485760
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect={{ zk_addr }}
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
group.initial.rebalance.delay.ms=10000
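The reason three near-identical templates exist is that broker.id must be unique cluster-wide. A quick duplicate check over the three id values could be sketched like this (the ids below stand in for the brokerid values in group_vars; real file paths are not assumed):

```shell
# Hypothetical check: flag duplicate broker.id values across the three
# templates; "1 2 3" stands in for brokerid1..brokerid3 from group_vars.
ids="1 2 3"
dups=$(printf '%s\n' $ids | sort | uniq -d)
if [ -z "$dups" ]; then verdict="broker ids unique"; else verdict="duplicates: $dups"; fi
echo "$verdict"
```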
C. The hosts file of kafka
vim /chj/ansibleplaybook/kafka/hosts
[kafka]
172.21.184.43
172.21.184.44
172.21.184.45
D. The yml file for installing kafka
vim /chj/ansibleplaybook/kafka/kafkainstall.yml
---
- hosts: "kafka"
  gather_facts: yes
  tasks:
    - name: Obtain eth0 ipv4 address
      debug:
        msg: "{{ ansible_default_ipv4.address }}"
      when: ansible_default_ipv4.alias == "eth0"
    - name: Create kafka group
      group:
        name: '{{ kafka_group }}'
        state: present
      tags:
        - kafka_user
    - name: Create kafka user
      user:
        name: '{{ kafka_user }}'
        group: '{{ kafka_group }}'
        state: present
        createhome: no
      tags:
        - kafka_group
    - name: Check whether kafka is already installed
      stat:
        path: /chj/app/kafka
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Check for the existence of a java environment
      shell: |
        if [ ! -f "/usr/local/jdk/bin/java" ]; then
          echo "create directory"
          curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz
          tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/
          mv /usr/local/jdk1.8.0_121 /usr/local/jdk
          ln -s /usr/local/jdk/bin/java /sbin/java
        else
          echo "directory already exists"
        fi
    - name: Download and decompress kafka
      unarchive:
        src: http://ops.chehejia.com:9090/pkg/kafka.tar.gz
        dest: /chj/app/
        copy: no
      when: node_files.stat.exists == False
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create kafka data directory and log directory
      shell: |
        if [ ! -d "/chj/data/kafka" ] && [ ! -d "/chj/data/log/kafka" ]; then
          echo "create directory"
          mkdir -p /chj/data/{kafka,log/kafka}
        else
          echo "directory already exists"
        fi
    - name: Modify directory permissions
      shell: chown work:work -R /chj/{data,app}
      when: node_files.stat.exists == False
    - name: Config kafka01 service
      template:
        src: server.properties-1.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.43"
    - name: Config kafka02 service
      template:
        src: server.properties-2.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.44"
    - name: Config kafka03 service
      template:
        src: server.properties-3.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.45"
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart kafka service
      shell: sudo su - work -c "/chj/app/kafka/console start"
    - name: Status kafka service
      shell: "sudo su - work -c '/chj/app/kafka/console status'"
      register: kafka_status_result
      ignore_errors: True
    - debug:
        msg: "{{ kafka_status_result }}"
PS: download the binary packages of JDK, Kafka, and ZooKeeper yourself, and replace the download URLs above with addresses you can actually reach.
III. Deployment
1. Deploy the zookeeper cluster first
cd /chj/ansibleplaybook/zookeeper/
ansible-playbook -i hosts zooKeeperinstall.yml -b
2. Deploy kafka cluster
cd /chj/ansibleplaybook/kafka/
ansible-playbook -i hosts kafkainstall.yml -b
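After both playbooks finish, a basic smoke test is to list topics through the ZooKeeper chroot configured above. This is a hedged sketch: it only does anything useful on a deployed node, and the paths and the 10311/kafka connect string come from the configuration in this article.

```shell
# Hedged smoke test: list topics via the ZooKeeper chroot set up above.
# Only meaningful on a host where /chj/app/kafka actually exists.
KAFKA_HOME=/chj/app/kafka
if [ -x "$KAFKA_HOME/bin/kafka-topics.sh" ]; then
  msg=$("$KAFKA_HOME/bin/kafka-topics.sh" --zookeeper 172.21.184.43:10311/kafka --list)
else
  msg="kafka not installed on this host"
fi
echo "$msg"
```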