How to build kafka Environment for win10


This article mainly introduces how to build a Kafka environment on Windows 10. Many people have questions about this in day-to-day work, so the steps below have been collected and organized into a simple, easy-to-follow method. I hope it helps resolve those doubts; please follow along and study.

Description

This blog is updated every Friday.

This post mainly covers the process of installing Kafka on Windows 10, explains the official scripts, and demonstrates generating and consuming test data, so it is quite practical.

Operation environment

JDK 1.8

Kafka 2.4.1

Scala 2.12

Build steps

Download the installation package

Download the official Kafka release from the official download address; the package ships with a built-in ZooKeeper.

Download address of Kafka 2.4.1 (Scala 2.12)

Installation and configuration

Create folders

Under the Kafka root directory, create a new data folder (to store snapshots) and a kafka-logs folder (to store logs).

Modify the configuration: go to the config directory

Change the log.dirs path in server.properties to log.dirs=D:\test\kafka_2.12-2.4.1\kafka-logs (note: the folder separator must be "\").

Change the dataDir path in zookeeper.properties to dataDir=D:\test\kafka_2.12-2.4.1\data.
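Putting the two edits together, the relevant lines in the two files look roughly like this (a minimal sketch; D:\test\kafka_2.12-2.4.1 is just the example install path used above, so adjust it to your own location):

# config/server.properties
log.dirs=D:\test\kafka_2.12-2.4.1\kafka-logs

# config/zookeeper.properties
dataDir=D:\test\kafka_2.12-2.4.1\data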

server.properties description

log.dirs: specifies the file directory paths the Broker uses for its logs. There is no default value, so it must be set. In a production environment you should configure multiple paths for log.dirs and, if conditions permit, make sure the directories are mounted on different physical disks. The advantages are better read and write performance, because multiple physical disks can read and write data at the same time for higher throughput, and failover: Kafka 1.1 introduced a failover feature under which data on a broken disk is automatically transferred to other healthy disks and the Broker keeps working. Thanks to this mechanism, Kafka can do without a RAID setup.

zookeeper.connect: a CSV-format parameter that can be specified as zk1:2181,zk2:2181,zk3:2181. Different Kafka clusters can be separated with a chroot path, for example zk1:2181,zk2:2181,zk3:2181/kafka1; the chroot only needs to be written once, at the end.

listeners: sets the listeners through which the internal network accesses the Kafka service.

advertised.listeners: sets the listeners through which the external network accesses the Kafka service.

auto.create.topics.enable: whether to allow automatic creation of topics.

unclean.leader.election.enable: whether to allow Unclean Leader elections.

auto.leader.rebalance.enable: whether to allow periodic Leader rebalancing; it is recommended to set this to false in production.

log.retention.{hours|minutes|ms}: controls how long message data is retained. Priority: the ms setting is highest, minutes is next, and hours is lowest.

log.retention.bytes: specifies the total disk capacity the Broker may use for messages.

message.max.bytes: controls the maximum message size the Broker can accept.
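For reference, a server.properties excerpt that sets the parameters described above might look like the following. This is only a sketch: the ZooKeeper hosts, listener addresses and retention values are illustrative assumptions, not values recommended by this article.

# log storage: no default, must be set; multiple disks improve throughput and enable failover
log.dirs=D:\test\kafka_2.12-2.4.1\kafka-logs
# ZooKeeper connection in CSV format, with an optional chroot written once at the end
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka1
# listeners for internal access and the addresses advertised to external clients
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092
# topic creation and Leader election behaviour
auto.create.topics.enable=false
unclean.leader.election.enable=false
auto.leader.rebalance.enable=false
# retention: the ms setting overrides minutes, which overrides hours
log.retention.hours=168
log.retention.bytes=-1
# maximum message size the Broker accepts, in bytes
message.max.bytes=1048576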

Start

On Windows, use the .bat scripts under the bin\windows directory.

Start zookeeper

From the Kafka root directory, execute .\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties and do not close the window after it starts.

Start kafka

From the Kafka root directory, execute .\bin\windows\kafka-server-start.bat .\config\server.properties and do not close the window after it starts.

Kafka scripts

Each Windows .bat script corresponds to a .sh script of the same name; the descriptions below use the .sh names.

Script description

connect-standalone.sh is used to start the Kafka Connect component in single-node Standalone mode.

connect-distributed.sh is used to start the Kafka Connect component in multi-node Distributed mode.

kafka-acls.sh is used to set Kafka permissions, such as which users can access which Kafka topics.

kafka-delegation-tokens.sh is used to manage Delegation Tokens. Delegation-Token-based authentication is a lightweight mechanism that supplements the SASL authentication mechanism.

kafka-topics.sh is used to manage all topics.

kafka-console-producer.sh is used to produce messages.

kafka-console-consumer.sh is used to consume messages.

kafka-producer-perf-test.sh is used for producer performance testing.

kafka-consumer-perf-test.sh is used for consumer performance testing.

kafka-delete-records.sh is used to delete partition messages in Kafka. Because Kafka has its own automatic message-deletion policy, this script is not used very often.

kafka-dump-log.sh is used to view the contents of Kafka message files, including each message's metadata and message body.

kafka-log-dirs.sh is used to query the disk usage of each log path on each Broker.

kafka-mirror-maker.sh is used to mirror data between Kafka clusters.

kafka-preferred-replica-election.sh is used to perform Preferred Leader elections and can change the Leader for specified topics.

kafka-reassign-partitions.sh is used to perform partition replica migration and log-file path migration.

kafka-run-class.sh is used to execute any Kafka class that has a main method.

kafka-server-start.sh is used to start the Broker process.

kafka-server-stop.sh is used to stop the Broker process.

kafka-streams-application-reset.sh is used to reset the offsets of a Kafka Streams application so that it can re-consume data.

kafka-verifiable-producer.sh is used to test and verify producer functionality.

kafka-verifiable-consumer.sh is used to test and verify consumer functionality.

trogdor.sh is Kafka's testing framework, used to run various benchmarks and load tests.

kafka-broker-api-versions.sh is mainly used to verify compatibility between servers and clients of different Kafka versions.
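As a quick illustration of how a couple of these scripts are invoked on the Windows setup from this article (a sketch only; the broker address and log file path are assumptions based on the local configuration above):

.\bin\windows\kafka-log-dirs.bat --bootstrap-server localhost:9092 --describe
.\bin\windows\kafka-dump-log.bat --files .\kafka-logs\test-0\00000000000000000000.log --print-data-log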

Script usage

View all topics: .\bin\windows\kafka-topics.bat --zookeeper zookeeper_host:port --list

Create a topic named test with three replicas and one partition: .\bin\windows\kafka-topics.bat --zookeeper zookeeper_host:port --create --replication-factor 3 --partitions 1 --topic test

Delete the topic named test: .\bin\windows\kafka-topics.bat --zookeeper zookeeper_host:port --delete --topic test

View topic information: .\bin\windows\kafka-topics.bat --zookeeper zookeeper_host:port --describe --topic test

Modify the number of partitions of a topic: kafka-topics.sh --bootstrap-server broker_host:port --alter --topic test --partitions <new partition count>
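For example, on the local single-broker setup used in this article, increasing the test topic to three partitions would look roughly like this (note that the partition count can only be increased, never decreased):

.\bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --alter --topic test --partitions 3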

Topic speed limit

The --entity-name parameter is used to specify the Broker ID. If the topic's replicas sit on multiple Brokers, the command needs to be executed for each of those Brokers in turn.

While a topic's replicas are running the replica-synchronization mechanism, you can limit the bandwidth used by the Leader and Follower replicas so that synchronization does not consume too much bandwidth, for example no more than 100 MB/s (104857600 bytes). First set the leader.replication.throttled.rate and follower.replication.throttled.rate parameters on the Broker side. The command is as follows: kafka-configs.sh --zookeeper zookeeper_host:port --alter --add-config 'leader.replication.throttled.rate=104857600,follower.replication.throttled.rate=104857600' --entity-type brokers --entity-name 0

Then set the throttle for all replicas of the topic; the wildcard * can be used to cover them all. The command is as follows: kafka-configs.sh --zookeeper zookeeper_host:port --alter --add-config 'leader.replication.throttled.replicas=*,follower.replication.throttled.replicas=*' --entity-type topics --entity-name test
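On the Windows setup described in this article, the same commands are run through the .bat wrappers, for example as below (a sketch; broker 0 and the test topic are the examples used above, and double quotes are used because the Windows command line does not interpret single quotes):

.\bin\windows\kafka-configs.bat --zookeeper localhost:2181 --alter --add-config "leader.replication.throttled.rate=104857600,follower.replication.throttled.rate=104857600" --entity-type brokers --entity-name 0
.\bin\windows\kafka-configs.bat --zookeeper localhost:2181 --alter --add-config "leader.replication.throttled.replicas=*,follower.replication.throttled.replicas=*" --entity-type topics --entity-name test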

Kafka functional test

Port description

2181: ZooKeeper communication port, used for topic operations

9092: data port, used by producers and consumers

Create a topic

Create a topic named test: in the Kafka root directory, execute .\bin\windows\kafka-topics.bat --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic test, and keep the window open when finished.

Create a producer

Open a new window and execute .\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test, then enter messages; each press of Enter sends one message.

Create a consumer

In the Kafka root directory, execute .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning; the window will output the messages you just entered in the producer.

You can enter content in the producer window, and the consumer window will continue to output the results.

Kafka version compatibility

Prior to Kafka 0.10.2.0, compatibility was one-way: a higher-version Broker could handle requests sent by a lower-version Client, but a lower-version Broker could not handle requests from a higher-version Client. Since Kafka 0.10.2.0, two-way compatibility is officially supported, so a lower-version Broker can also handle requests from a higher-version Client.
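To check which request versions a Broker and the client tooling agree on, the kafka-broker-api-versions script mentioned earlier can be used, for example against the local Broker from this article:

.\bin\windows\kafka-broker-api-versions.bat --bootstrap-server localhost:9092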

Summary

There is no end to learning. Revisiting the same thing can bring new insights every time. It is not enough for a technology to simply work; you should understand the principles behind it and keep asking what it is, why it is that way, and how it could be done better. Only continuous improvement brings more knowledge. Learning is not about reaching a particular result, but about maintaining a certain state.

At this point, the study of "how to build a kafka environment for win10" is over. I hope it has resolved your doubts. Pairing theory with practice is the best way to learn, so go and try it! If you want to keep learning more related knowledge, more practical articles will follow.
