
What is the Apache Kafka framework like?


What is the Apache Kafka framework like? Many people who are new to it are unsure how to answer that question. This article gives a brief introduction to Kafka and walks through a simple producer and consumer example; hopefully it will help you answer it yourself.

Apache Kafka framework

Here is a brief description of Kafka.

About Kafka

Kafka is a distributed messaging framework under the Apache Software Foundation for processing streams of data. It is horizontally scalable, fault tolerant, and efficient. It is typically used to:

Build real-time data pipelines that transfer data between systems.

Build real-time streaming applications that transform or react to streams of data.

The overall structure of Kafka is similar to that of RabbitMQ: message producers send messages to the Kafka server, Kafka receives them, and then delivers them to consumers. In Kafka, a producer's messages are sent to a Topic. A Topic can store all kinds of data, and each piece of data is saved as a key and a value. Each Topic contains one or more physical partitions (Partition) that maintain the message content and indexes, and these partitions may be stored on different servers. The client does not care how the data is stored; it only needs to know which Topic a message is sent to.
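
To make the relationship between Topics and partitions concrete, here is a minimal sketch that is not taken from the book (the class name and the partition count are just illustrative); it uses the AdminClient API shipped with kafka-clients since version 0.11.0.0 to create a Topic with several partitions instead of relying on automatic Topic creation.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdminMain {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        AdminClient admin = AdminClient.create(props);
        // a Topic named "my-topic" with 3 partitions and a replication factor of 1;
        // in a multi-server cluster the partitions may be placed on different brokers
        NewTopic topic = new NewTopic("my-topic", 3, (short) 1);
        admin.createTopics(Collections.singletonList(topic)).all().get();
        admin.close();
    }
}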

Run the Kafka server

Kafka depends on ZooKeeper, so start ZooKeeper before starting the Kafka server. The ZooKeeper version used in this chapter is 3.4.8, and the Kafka version is 0.11.0.0 (built for Scala 2.11). Download the compressed packages of the two frameworks and extract them to get the zookeeper-3.4.8 and kafka_2.11-0.11.0.0 directories respectively.

First go to the zookeeper-3.4.8/conf directory, make a copy of the zoo_sample.cfg file, and rename the copy zoo.cfg. Then, using the command-line tool, go to the zookeeper-3.4.8/bin directory and run the "zkServer" command. If ZooKeeper starts normally, it will listen on port 2181. Keep this command-line window open and then start Kafka.

Using the command-line tool, go to the "kafka_2.11-0.11.0.0/bin/windows" directory and run the "kafka-server-start ../../config/server.properties" command (the path to server.properties is relative to the bin/windows directory) to start the Kafka server. If it starts normally, it will listen on port 9092. The Kafka server here plays the same role as the RabbitMQ server in the previous chapter, and Kafka likewise provides an API for writing clients. Next, we use Kafka's API to test it in the same way.

Write the producer

Create a new Maven project named "kafka-test" and add the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.11.0.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.9</version>
</dependency>

Next, create a startup class for the producer, as shown in listing 8-3.

Listing 8-3: codes\08\8.3\kafka-test\src\main\java\org\crazyit\cloud\ProducerMain.java

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerMain {
    public static void main(String[] args) throws Exception {
        // configuration information
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // set the serialization class for the data key
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // set the serialization class for the data value
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // create a producer instance
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // create a new record; the first parameter is the Topic name
        ProducerRecord<String, String> record =
                new ProducerRecord<String, String>("my-topic", "userName", "Angus");
        // send the record
        producer.send(record);
        producer.close();
    }
}

The producer code is simpler than the RabbitMQ version: create a Properties instance, use it directly to create a Producer, then create a ProducerRecord and send it. When the record is created, it specifies that the message is delivered to "my-topic" with the key "userName" and the value "Angus". After the message is sent, Kafka creates the corresponding Topic on the server. Run listing 8-3 to publish the message to the Kafka server's Topic; you can then use a command to view the server's Topics.
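
As a small extension that is not part of listing 8-3, send() also accepts a Callback that is invoked once the broker has acknowledged the record; the sketch below only assumes the standard kafka-clients Callback interface and prints the partition and offset the record was written to. Add the two imports to listing 8-3 and replace the producer.send(record) call:

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                    // the send failed
                    exception.printStackTrace();
                } else {
                    // confirm where the record ended up
                    System.out.println("sent to partition " + metadata.partition()
                            + ", offset " + metadata.offset());
                }
            }
        });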

Use the command-line tool to go to the kafka_2.11-0.11.0.0/bin/windows directory and enter the command "kafka-topics --list --zookeeper localhost:2181"; you will see the Topics of the current Kafka server, as shown in figure 8-8.

Figure 8-8 View Topic

If you want to delete a Topic on the server, you can use the "kafka-topics --delete --zookeeper localhost:2181 --topic my-topic" command, but by default executing this command only marks the Topic as deleted. If you really want to delete the Topic, you need to modify the config/server.properties file and add the "delete.topic.enable=true" configuration.
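
Topics can also be deleted programmatically. The following is only a sketch and is not part of the book's code (the class name is made up); it uses the same AdminClient API as the earlier topic-creation sketch and assumes delete.topic.enable=true has already been set on the server.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;

public class TopicDeleteMain {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        AdminClient admin = AdminClient.create(props);
        // ask the broker to delete "my-topic"; without delete.topic.enable=true
        // the Topic is only marked for deletion
        admin.deleteTopics(Collections.singletonList("my-topic")).all().get();
        admin.close();
    }
}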

Write the consumer

In this example, the producer and the consumer live in the same project but use different startup classes. The producer written in the previous section sends its messages to "my-topic"; the consumer subscribes to that Topic to receive them, as shown in listing 8-4.

Listing 8-4: codes\08\8.3\kafka-test\src\main\java\org\crazyit\cloud\ConsumerMain.java

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerMain {
    public static void main(String[] args) {
        // configuration information
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // the consumer group must be specified
        props.put("group.id", "test");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        // subscribe to my-topic's messages
        consumer.subscribe(Arrays.asList("my-topic"));
        // read records
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("key: " + record.key() + ", value: " + record.value());
            }
        }
    }
}

After setting up the configuration information, the code creates a KafkaConsumer instance, subscribes to "my-topic" through this instance, and finally uses the poll method of KafkaConsumer to fetch messages from the server and print them. Run the consumer in listing 8-4, then run the producer in listing 8-3 to send a message, and you will see output like the following:

key: userName, value: Angus

Consumer Group

When writing a consumer, you must specify a consumer group id. Because the consumer group concept also appears in Spring Cloud Stream, it deserves a closer explanation.

Every consumer labels itself with a consumer group, and each record published to a Topic is delivered to one consumer instance within each subscribing consumer group. If multiple consumer instances share the same consumer group, the records are spread across those instances, achieving load balancing. If the consumers all belong to different consumer groups, every record is broadcast to all of them. If this is hard to picture, see figure 8-9.

Figure 8-9 Consumer Group

As shown in figure 8-9, if consumer A and consumer B belong to the same consumer group, a message sent by the producer is handed to only one of them for processing; if the two consumers belong to different consumer groups, the message is delivered to both of them (broadcast).
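
To try the behaviour in figure 8-9 yourself, the only thing that has to change between the two cases is the group.id setting. The sketch below is not part of the book's code (the class name and group ids are made up): start two instances of it, and with the same group id argument they share the records of "my-topic", while with different group ids each instance receives every record.

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupDemoConsumer {
    public static void main(String[] args) {
        // the consumer group id is passed as the first argument, e.g. "group-A"
        String groupId = args.length > 0 ? args[0] : "group-A";
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("my-topic"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                // prefix each record with the group that received it
                System.out.println(groupId + " received key: " + record.key()
                        + ", value: " + record.value());
            }
        }
    }
}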

After reading the above, do you now have a clearer picture of what the Apache Kafka framework is like? Thank you for reading!
