2025-01-16 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 report:
**Kafka cluster configuration: SASL + ACL**

**Test environment:**

- System: CentOS 6.5 x86_64
- JDK: java version 1.8.0_121
- Kafka: kafka_2.11-1.0.0.tgz
- ZooKeeper: 3.4.5
- IP: 192.168.49.161 (the entire environment is deployed on a single machine here)
Kafka terminology:

- Broker: a Kafka cluster contains one or more servers, each of which is called a broker.
- Topic: every message published to a Kafka cluster has a category called its Topic. (Physically, messages with different Topics are stored separately. Logically, a Topic's messages are stored on one or more brokers, but users only need to specify a message's Topic to produce or consume data, without caring about where the data is stored.)
- Partition: a Partition is a physical concept; each Topic contains one or more Partitions.
- Producer: the client responsible for publishing messages to a Kafka broker, i.e. the producer.
- Consumer: the message consumer, i.e. a client that reads messages from a Kafka broker.
- Consumer Group: each Consumer belongs to a specific Consumer Group (you can specify a group name for each Consumer; Consumers without a group name fall into the default group).
Kafka topology (diagram borrowed from elsewhere)

A typical Kafka cluster contains several Producers (which can be page views generated by the web front end, server logs, system CPU or memory metrics, etc.), several brokers (Kafka supports horizontal scaling; generally, the more brokers, the higher the cluster throughput), several Consumer Groups, and a ZooKeeper cluster. Kafka manages the cluster configuration through ZooKeeper, elects leaders, and rebalances when a Consumer Group changes. Producers publish messages to brokers in push mode; Consumers subscribe to and consume messages from brokers in pull mode.
Kafka cluster deployment
I. ZooKeeper cluster deployment (a pseudo-cluster is deployed here)

ZooKeeper deployment is relatively simple: it can be used after decompressing the archive and modifying the configuration file. The default configuration file is zoo_sample.cfg; here we directly create a new zoo.cfg file as follows.
The meaning of each parameter is not explained in detail here; refer to the official ZooKeeper documentation if you want to understand them.

1. Create zoo.cfg (note: since all three nodes of this pseudo-cluster share one IP, each server.N entry needs its own pair of quorum/election ports):

```
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/opt/zook/zookeeper1/data    # data path
dataLogDir=/opt/zook/zookeeper1/logs # log path
clientPort=2181                      # listening port
server.1=127.0.0.1:7771:7772
server.2=127.0.0.1:7773:7774
server.3=127.0.0.1:7775:7776
```

2. Create a file named myid under zookeeper1/data and write the number 1 into it.
3. For the cluster deployment, make two more copies of the single-node configuration above; in each copy, adjust the dataDir and dataLogDir parameters to the actual paths, change clientPort to 2182 and 2183, and set the myid contents to 2 and 3 respectively.
4. Start the cluster.
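Steps 2 and 3 above can also be scripted. The sketch below (illustrative, not from the article; it uses a scratch directory in place of /opt/zook) lays out the three pseudo-cluster node directories, each with its own zoo.cfg and myid:

```python
# Sketch of steps 2-3: create the three pseudo-cluster node directories,
# each with its own zoo.cfg (distinct client port and paths) and a myid file.
# A temporary directory stands in for /opt/zook from the article.
import os
import tempfile

TEMPLATE = """tickTime=2000
initLimit=5
syncLimit=2
dataDir={base}/zookeeper{n}/data
dataLogDir={base}/zookeeper{n}/logs
clientPort={port}
server.1=127.0.0.1:7771:7772
server.2=127.0.0.1:7773:7774
server.3=127.0.0.1:7775:7776
"""

def make_pseudo_cluster(base):
    """Create zookeeper1..3 under base, each with conf/zoo.cfg and data/myid."""
    for n, port in ((1, 2181), (2, 2182), (3, 2183)):
        data = os.path.join(base, "zookeeper%d" % n, "data")
        conf = os.path.join(base, "zookeeper%d" % n, "conf")
        os.makedirs(data)
        os.makedirs(conf)
        with open(os.path.join(conf, "zoo.cfg"), "w") as f:
            f.write(TEMPLATE.format(base=base, n=n, port=port))
        with open(os.path.join(data, "myid"), "w") as f:
            f.write(str(n))  # myid must match the server.N entry for this node

base = tempfile.mkdtemp()
make_pseudo_cluster(base)
```

After running this, each node can be started by pointing zkServer.sh at its own zoo.cfg.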
II. Kafka cluster deployment
Since the cluster will ultimately be exposed to external networks, this guide adopts SASL + ACL authentication and authorization for security reasons.
1. Broker configuration
1.1 Make three copies of server.properties, named server-1.properties, server-2.properties and server-3.properties, and modify each (fill in according to your actual environment):

```
broker.id=1                        # 2 and 3 on the other two nodes
host.name=192.168.49.161
log.dirs=/tmp/kafka-logs-1         # kafka-logs-2 and kafka-logs-3 on the other two nodes; the directories need not live under /tmp, but mind the permissions if placed elsewhere
zookeeper.connect=192.168.49.161:2181,192.168.49.161:2182,192.168.49.161:2183
port=9092                          # 9093 and 9094 on the other two nodes, or customize
listeners=SASL_PLAINTEXT://192.168.49.161:9092   # ports 9093 and 9094 on the other two nodes
```
1.2 To configure SASL and ACL, two settings need to be made on the broker side.
The first is to create a JAAS file containing all the authenticated user information. In this example there are two users, admin and qjld: admin is used for authentication between brokers in the cluster, and qjld is the authenticated user for remote applications. Create the authentication information file kafka_server_jaas.conf:

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-mgr998778123"
    user_admin="admin-mgr998778123"
    user_qjld="123456";
};
```
In this example the path is /opt/kafka_2.11-1.0.0/config/kafka_server_jaas.conf. We need to pass the contents of this file to the JVM, so modify the /opt/kafka_2.11-1.0.0/bin/kafka-server-start.sh script: on the line before

```
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
```

add

```
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.11-1.0.0/config/kafka_server_jaas.conf"
```

then save and exit.
Second, add the following to every broker configuration file (server-x.properties):

```
# ACL authorizer class
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
allow.everyone.if.no.acl.found=true
# set admin as the superuser
super.users=User:admin
```
Now we can start the brokers to test:

```
bin/kafka-server-start.sh config/server-1.properties &
bin/kafka-server-start.sh config/server-2.properties &
bin/kafka-server-start.sh config/server-3.properties &
```
After starting successfully, broker can receive authenticated client connections. Let's configure the client.
```
# create a topic named test
bin/kafka-topics.sh --create --zookeeper 192.168.49.161:2181,192.168.49.161:2182,192.168.49.161:2183 --replication-factor 1 --partitions 1 --topic test
# check whether the topic was created successfully
bin/kafka-topics.sh --list --zookeeper 192.168.49.161:2181
```
2. Client configuration (for connecting with the Kafka command-line tools)
2.1 add a configuration file such as / opt/kafka_2.11-1.0.0/config/kafka_client_jaas.conf, which reads:
```
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="qjld"
    password="123456";
};
```
Similarly, we need to pass the contents of this file to the JVM, so modify the /opt/kafka_2.11-1.0.0/bin/kafka-console-producer.sh script: on the line before its final exec of kafka-run-class.sh, add

```
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.11-1.0.0/config/kafka_client_jaas.conf"
```

Modify kafka-console-consumer.sh in the same way.
2.2 Append the following two lines to the config/consumer.properties and config/producer.properties files:

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
```

Once they are in place, run:

```
bin/kafka-console-producer.sh --broker-list 192.168.49.161:9092 --topic test --producer.config config/producer.properties
bin/kafka-console-consumer.sh --bootstrap-server 192.168.49.161:9092 --topic test --from-beginning --consumer.config config/consumer.properties
```
Next, test writing and reading messages. Open two terminals and enter the following:

```
# write messages
bin/kafka-console-producer.sh --broker-list 192.168.49.161:9092 --topic test
# read messages
bin/kafka-console-consumer.sh --bootstrap-server 192.168.49.161:9092 --topic test --from-beginning
```
If the error message is as follows:
```
WARN Bootstrap broker 192.168.49.161:9092 disconnected (org.apache.kafka.clients.NetworkClient)
```
this means the configured security is taking effect. For ordinary users to read and write messages, ACLs need to be configured.
2.3 Configure ACL

```
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer \
  --authorizer-properties zookeeper.connect=192.168.49.161:2181,192.168.49.161:2182,192.168.49.161:2183 \
  --add --allow-principal User:qjld --operation All --topic test
```

This grants the user qjld all permissions on the topic test. For more fine-grained control, e.g. Read and Write only, change --operation All to --operation Read and add a further --operation Write after it.

**III. Testing**

Here I test with a script written in Python 2.7; first install the client library with pip2.7 install kafka-python.
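The article's own test script is not reproduced here, so the following is only an illustrative sketch: kafka-python's KafkaProducer and KafkaConsumer accept the SASL settings as keyword arguments. The broker address, user name and password are the ones configured above; the function names are mine, and the round trip itself requires the running cluster.

```python
# -*- coding: utf-8 -*-
# Illustrative kafka-python test sketch (not the article's original script).
# Broker address, user and password come from the SASL configuration above.

def sasl_client_config(bootstrap, user, password):
    """Build the keyword arguments shared by KafkaProducer and KafkaConsumer."""
    return {
        "bootstrap_servers": bootstrap,
        "security_protocol": "SASL_PLAINTEXT",
        "sasl_mechanism": "PLAIN",
        "sasl_plain_username": user,
        "sasl_plain_password": password,
    }

def run_round_trip():
    # Requires the running cluster from this article, so the import is local.
    from kafka import KafkaProducer, KafkaConsumer
    conf = sasl_client_config("192.168.49.161:9092", "qjld", "123456")
    producer = KafkaProducer(**conf)
    producer.send("test", b"hello from kafka-python")  # value must be bytes
    producer.flush()
    consumer = KafkaConsumer("test", auto_offset_reset="earliest", **conf)
    for message in consumer:
        print(message.value)
        break
```

If the ACL above has been applied, the producer send succeeds and the consumer prints the message; without it, the client sees the same disconnect warning as the console tools.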