This article explains in detail how to configure a Kafka cluster to use a PAM backend. It is shared here as a reference, and I hope you will have a solid understanding of the relevant concepts after reading it.
We'll look at how to configure Kafka clusters to use PAM backends instead of LDAP backends.
The examples shown here highlight the authentication-related properties to distinguish them from the other required security properties, as in the following example. We assume that TLS is enabled for the Apache Kafka cluster, as it should be for every secure cluster.
security.protocol=SASL_SSL
ssl.truststore.location=/opt/cloudera/security/jks/truststore.jks
We use kafka-console-consumer in all of the following examples. All concepts and configurations apply to other applications as well.
PAM Authentication
When a Kafka cluster is configured to perform PAM (Pluggable Authentication Module) authentication, Kafka delegates client authentication to PAM modules configured for the operating system it runs on.
The Kafka client configuration is identical to what we used for LDAP authentication, as we saw in the previous article:
# Uses SASL/PLAIN over a TLS encrypted connection
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# LDAP credentials
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="supersecret1";
# TLS truststore
ssl.truststore.location=/opt/cloudera/security/jks/truststore.jks
The above configuration uses SASL/PLAIN for authentication and TLS (SSL) for data encryption. The choice of PAM as the authentication backend is made in SASL/PLAIN's server-side callback handler, which we cover later in this section.
Ensure TLS/SSL encryption is being used
Similar to LDAP authentication, since usernames and passwords are sent over the network for client authentication, it is important to enable and enforce TLS encryption for all communication between Kafka clients and brokers. This ensures that credentials are always encrypted in transit and cannot be compromised.
All Kafka brokers must be configured to use the SASL_SSL security protocol for their SASL endpoints.
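As a rough illustration, the broker-side listener settings would look something like the following sketch. In a cluster managed by Cloudera Manager these are generated for you rather than edited by hand, and the hostname and port 9093 are assumptions taken from the console-consumer example later in this article:
# Illustrative broker listener configuration (normally generated by Cloudera Manager)
listeners=SASL_SSL://host-1.example.com:9093
advertised.listeners=SASL_SSL://host-1.example.com:9093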
Other requirements
Depending on the PAM module configured in your system, there may be additional requirements that need to be configured correctly for PAM authentication to work. The exact configuration depends on the module used and is outside the scope of this document.
The following are simple examples of two additional configurations that may be required when using certain PAM modules:
If the pam_unix module of the login service is to be used, the kafka user (the user running the Kafka broker) must have read access to the /etc/shadow file for authentication to work.
The following command is just a simple example of how to achieve this on a single node. There may be better ways to ensure that the entire cluster meets this requirement.
usermod -G shadow kafka
chgrp shadow /etc/shadow
chmod 440 /etc/shadow
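A quick way to verify that the change took effect on a node could be something like the following (a minimal check, not part of the original procedure):
# Confirm the kafka user is in the shadow group and the file is group-readable
$ id kafka
$ ls -l /etc/shadow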
If the pam_nologin module is used, the presence of the file /var/run/nologin on a broker prevents Kafka's PAM authentication from working properly. For PAM authentication to work, the file /var/run/nologin must be removed from all brokers, or the pam_nologin module must be disabled.
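For example, removing the file on a broker could look like the following (a minimal sketch; the exact path may differ between distributions):
# Remove the nologin marker so PAM authentication is not blocked
$ sudo rm -f /var/run/nologin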
Enable PAM authentication on Kafka Broker
PAM authentication is not enabled for Kafka brokers by default when the Kafka service is installed, but configuring it through Cloudera Manager is simple:
In Cloudera Manager, set the following properties in the Kafka service configuration to match your environment. By selecting PAM as the SASL/PLAIN authentication option, Cloudera Manager configures Kafka to use the following SASL/PLAIN callback handler:
org.apache.kafka.common.security.pam.internals.PamPlainServerCallbackHandler
Then configure the PAM service(s) to be used for authentication (for example, the login service mentioned above).
Click Kafka > Actions > Restart to restart the Kafka service and have the changes take effect.
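For reference, the broker-side effect of these steps is roughly equivalent to the following properties (a sketch only; Cloudera Manager generates the actual configuration, and the sasl_ssl listener name is an assumption):
# Enable SASL/PLAIN and delegate it to the PAM callback handler (illustrative)
sasl.enabled.mechanisms=PLAIN
listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=org.apache.kafka.common.security.pam.internals.PamPlainServerCallbackHandler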
Example
Note: The following information contains sensitive credentials. When storing this configuration in a file, make sure that file permissions are set so that only the file owner can read it.
The following is an example of reading from a topic with PAM authentication using the Kafka console consumer.
$ cat pam-client.properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="supersecret1";
ssl.truststore.location=/opt/cloudera/security/jks/truststore.jks
$ kafka-console-consumer \
    --bootstrap-server host-1.example.com:9093 \
    --topic test \
    --consumer.config ./pam-client.properties
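Since, as noted earlier, the same concepts and configuration apply to other applications, the same properties file can for example be reused with the console producer (an illustrative sketch; the topic and hostname are the same assumptions as above):
$ kafka-console-producer \
    --bootstrap-server host-1.example.com:9093 \
    --topic test \
    --producer.config ./pam-client.properties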
That covers how to configure a Kafka cluster to use a PAM backend. I hope the above content has been helpful and that you have learned something new. If you found the article useful, please share it so that more people can see it.