Detailed explanation of the advanced part for Linux operation and maintenance engineers (big data security direction)



Hadoop Security Directory:

Kerberos (released)

Elasticsearch (released) https://blog.51cto.com/chenhao6/2113873

Knox

Oozie

Ranger

Apache Sentry

Brief introduction:

From operations bronze, to silver, to gold — the question of direction comes up here, which in game terms means your equipment. Everyone should pick a professional and technical direction that suits them and that they enjoy, such as big data security, development, operations, or cloud computing operations. The more cutting-edge technology you master — that is, the better your equipment — the better you can get along in the IT industry; after all, IT technology is updated very quickly, as covered in the elementary and intermediate articles.

Preliminary chapter: a detailed explanation of the essential skills for Linux operation and maintenance engineers (bronze)

Intermediate chapter: detailed explanation of the Linux operation and maintenance engineer's monster-fighting upgrade (Silver)

Now let me formally introduce big data security:

1. Big data basic components

2. Hadoop security background

- Shared cluster
- Resource queues are divided according to business or application rules and assigned to specific users
- HDFS stores all kinds of data, both public and confidential
- Security authentication: ensure that a user is who he claims to be
- Security authorization: ensure that a user can only do what he is allowed to do

3. Device description

Service            IP             Hostname   System
Ambari, Kerberos   192.168.2.140  hdp140     CentOS 7.3
Namenode           192.168.2.141  hdp141     CentOS 7.3
Datanode           192.168.2.142  hdp142     CentOS 7.3
Datanode           192.168.2.143  hdp143     CentOS 7.3
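Kerberos is sensitive to hostname resolution, so before installing anything it is worth making sure every node can resolve the others by name. A hypothetical /etc/hosts fragment matching the table above (append it to /etc/hosts on every node):

```shell
# Hypothetical /etc/hosts entries for the four nodes in the table above.
# This just prints the fragment; append it to /etc/hosts on each node.
hosts_fragment='192.168.2.140 hdp140
192.168.2.141 hdp141
192.168.2.142 hdp142
192.168.2.143 hdp143'
echo "$hosts_fragment"
```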

4. Basic concepts of Kerberos:

- Principal: an authenticated individual with a name and password
- KDC (Key Distribution Center): a network service that issues tickets and temporary session keys
- Ticket: a record a client uses to prove its identity to a server, containing the client identity, a session key, and a timestamp
- AS (Authentication Server): the authentication server
- TGS (Ticket Granting Server): the ticket-granting server
- TGT (Ticket-Granting Ticket): the ticket used to obtain service tickets from the TGS
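These names come together in the principal format primary/instance@REALM (e.g. hive/hdp141@CESHI.COM). As a small illustration — not part of any Kerberos tool — plain shell string operations are enough to split a principal into its parts:

```shell
# Illustrative only: split a Kerberos principal (primary/instance@REALM)
# into its components with POSIX shell parameter expansion.
principal="hive/hdp141@CESHI.COM"
realm="${principal#*@}"     # text after '@'   -> CESHI.COM
name="${principal%@*}"      # text before '@'  -> hive/hdp141
primary="${name%%/*}"       # service or user  -> hive
instance="${name#*/}"       # host component   -> hdp141
echo "primary=$primary instance=$instance realm=$realm"
```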

5. Kerberos authentication process:

6. Cluster enables Kerberos authentication

Install KDC Server

1. Install a new KDC Server (any cluster host; hdp141 is used here as an example)

# yum install krb5-server krb5-libs krb5-workstation

2. Open the KDC Server configuration file

# vi /etc/krb5.conf

Modify the [realms] section of the file, replacing the default value "kerberos.example.com" of the kdc and admin_server properties with the hostname of the actual KDC server. In the example below, "kerberos.example.com" is replaced with "my.kdc.server".

[realms]
 EXAMPLE.COM = {
  kdc = my.kdc.server
  admin_server = my.kdc.server
 }

3. (optional) Customize the realm configuration (EXAMPLE.COM is changed to CESHI.COM; the following examples all use CESHI.COM)

# vi /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = CESHI.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 CESHI.COM = {
  kdc = hdp141
  admin_server = hdp141
 }

[domain_realm]
 .vrv.com = CESHI.COM
 vrv.com = CESHI.COM

# vi /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 CESHI.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

4. Create a Kerberos database

A master key must be set during the creation process.

# kdb5_util create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'CESHI.COM',
master key name 'K/M@CESHI.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: ceshi123456.
Re-enter KDC database master key to verify: ceshi123456.

5. Start KDC

# service krb5kdc start
# chkconfig krb5kdc on
# service kadmin start
# chkconfig kadmin on

6. Create kerberos Admin

Create a KDC admin by adding an admin principal; you will be prompted to set a password for it.

# kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@CESHI.COM with password.
WARNING: no policy specified for admin/admin@CESHI.COM; defaulting to no policy
Enter password for principal "admin/admin@CESHI.COM": ceshi123456.
Re-enter password for principal "admin/admin@CESHI.COM": ceshi123456.
Principal "admin/admin@CESHI.COM" created.

Open the KDC ACL file and confirm that admin principal has permissions in KDC ACL. If there is no corresponding domain, you need to add it.

# vi /var/kerberos/krb5kdc/kadm5.acl
*/admin@CESHI.COM *

If you modify the kadm5.acl file, you must restart the kadmin process.

# service kadmin restart

7. Enable Kerberos protection

Install JCE

The existing local JCE policy files must be overwritten with the JCE downloaded from the official website; otherwise the encryption algorithms Kerberos needs (such as AES-256) will be unavailable.

On the host where the Ambari server resides and on all hosts in the cluster, select the JCE policy file appropriate to the JDK version in use.

Oracle JDK 1.7:
http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

Oracle JDK 1.8:
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

On the host where the Ambari Server resides and on all hosts in the cluster, add unlimited security policy JCE jars

Go to the directory $AMBARI_SERVER_JAVA_HOME/jre/lib/security/.

Note: on all hosts, the JCE packages must be extracted to the JDK directory specified by the java.home property in the configuration file /etc/ambari-server/conf/ambari.properties

# JAVA_HOME=/usr/java/default
# unzip -o -j -q UnlimitedJCEPolicyJDK8.zip -d $JAVA_HOME/jre/lib/security/
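The JCE zip for JDK 8 contains two policy jars, local_policy.jar and US_export_policy.jar; after the unzip above both should sit in $JAVA_HOME/jre/lib/security/. A hedged sketch of checking that — demonstrated against a temp directory standing in for the real JDK path:

```shell
# Sketch: verify the two unlimited-strength policy jars are in place.
# A temp directory stands in for $JAVA_HOME/jre/lib/security/ here.
secdir=$(mktemp -d)
touch "$secdir/local_policy.jar" "$secdir/US_export_policy.jar"
ls "$secdir" | grep -c 'policy'    # should report 2
```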

Restart Ambari Server (on the Ambari Server host, hdp140)

# service ambari-server restart

8. Run the Kerberos Protection Wizard

1. Verify that KDC is secure and configured correctly, and that JCE has been configured on all hosts in the cluster.

2. Log in to Ambari Web and open Administrator > Kerberos

3. Click Enable Kerberos to launch the installation wizard, and run the prerequisite checks

4. Provide information about KDC and administrator accounts

For more information about KDC, please refer to the configuration file / etc/krb5.conf

5. Ambari installs the Kerberos client on the cluster hosts, then tests whether it can create a principal, generate a keytab, and distribute the keytab, to verify that the KDC is reachable.

Customize the Kerberos identities used by Hadoop

6. Confirm your configuration. You can download the automatically created CSV file containing principals and Keytabs from the page.

7. Stop services

8. Enable kerberos

The keytabs are saved in the /etc/security/keytabs directory on each host.

9. Start and test services

Click finish after successfully starting and testing the service to end the activation of Kerberos.

10. View enabled Kerberos configuration

Here the kerberos installation is complete.

Advanced options:

Set Kerberos for Ambari Server (optional)

1. Use kadmin to create a principal for Ambari Server on the host where your KDC is located (hdp141). (ambari-server is a custom name)

# kadmin.local -q "addprinc -randkey ambari-server@CESHI.COM"

2. Generate a keytab for this principal

# kadmin.local -q "xst -k ambari.server.keytab ambari-server@CESHI.COM"

3. Copy the keytab generated in the previous step to the host where Ambari Server resides. Make sure the file has appropriate permissions and can be read by the account that starts the Ambari Server daemon.

# scp ambari.server.keytab hdp140:/etc/security/keytabs/
# ll /etc/security/keytabs/ambari.server.keytab
-r--r----- 1 root root 530 Dec 18 20:06 /etc/security/keytabs/ambari.server.keytab
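The restrictive permissions in the listing above matter: a keytab is equivalent to a password. A small sketch of locking a copied keytab down to mode 440 — demonstrated on a placeholder file in /tmp so the permission bits are easy to verify; on the real host the path would be /etc/security/keytabs/ambari.server.keytab:

```shell
# Sketch: restrict a keytab to read-only for owner and group (mode 440).
# An empty placeholder file stands in for the real keytab here.
keytab=/tmp/ambari.server.keytab
touch "$keytab"
chmod 440 "$keytab"
stat -c '%a' "$keytab"    # -> 440
```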

4. Stop ambari server

# ambari-server stop

5. Run the setup-security command to configure JAAS. The highlighted values below are the ones you need to enter.

a. Choose option [3]: Setup Ambari kerberos JAAS configuration

b. Enter the principal name set for Ambari Server in the first step

c. Enter the path to the Keytab of the Ambari principal

# ambari-server setup-security
Using python /usr/bin/python2
Security setup options...
===========================================================================
Choose one of the following options:
  [1] Enable HTTPS for Ambari server.
  [2] Encrypt passwords stored in ambari.properties file.
  [3] Setup Ambari kerberos JAAS configuration.
  [4] Setup truststore.
  [5] Import certificate to truststore.
===========================================================================
Enter choice, (1-5): 3
Setting up Ambari kerberos JAAS configuration to access secured Hadoop daemons...
Enter ambari server's kerberos principal name (ambari@CESHI.COM): ambari-server@CESHI.COM
Enter keytab path for ambari server's kerberos principal: /etc/security/keytabs/ambari.server.keytab
Ambari Server 'setup-security' completed successfully.

Restart Ambari Server:
# ambari-server restart

Now for hands-on testing:

1. Create a test user

Managing permissions for ordinary users requires installing Ranger (described later).

List all users

# kadmin.local    # execute on the KDC server
kadmin.local: listprincs    # list all users
ambari-server@CESHI.COM
nn/hdp140@CESHI.COM
zookeeper/hdp142@CESHI.COM
zookeeper/hdp143@CESHI.COM

Create a test user

kadmin.local: addprinc test
Enter password for principal "test@CESHI.COM": ceshi123456.
Re-enter password for principal "test@CESHI.COM": ceshi123456.
Principal "test@CESHI.COM" created.

Login authentication

# kinit test    # log in
Password for test@CESHI.COM: ceshi123456.

Exit login status

# kdestroy    # log out

Cluster login and authorization (hdfs users)

Execute before using kerberos user authentication

# hadoop dfs -ls /

Use kerberos user authentication

# kinit test    # log in
Password for test@CESHI.COM: ceshi123456.
# hadoop dfs -ls /
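Scripts that wrap Hadoop commands often guard them with `klist -s`, which is silent and exits 0 only when the ticket cache holds a valid ticket. A hedged sketch (not from the original article):

```shell
# Sketch: only run the Hadoop command when a valid Kerberos ticket exists.
# `klist -s` exits non-zero if the cache is empty or expired.
if command -v klist >/dev/null 2>&1 && klist -s 2>/dev/null; then
  echo "valid ticket present"    # safe to run: hadoop dfs -ls /
else
  echo "no valid ticket: run kinit first"
fi
```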

At this point the test user has read permission by default, but no directory has been authorized to it.

Change to hdfs user and initialize hdfs

View the Kerberos user name of hdfs

# klist -k /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:hdfs.headless.keytab
KVNO Principal
---- ------------------------------
   1 hdfs-test@CESHI.COM
   1 hdfs-test@CESHI.COM
   1 hdfs-test@CESHI.COM

Initialize authentication as the hdfs user:
# kinit -k hdfs-test@CESHI.COM -t /etc/security/keytabs/hdfs.headless.keytab

Create a directory:
# hadoop fs -mkdir /test

View the directory attributes, then change the directory ownership:
# hadoop fs -chown test:hdfs /test

Log in using the test user

Change password and regenerate

# Change the password with the cpw command (executed on the KDC server):
# kadmin.local
Authenticating as principal test/admin@CESHI.COM with password.
kadmin.local: cpw test
Enter password for principal "test1@CESHI.COM": ceshi123
Re-enter password for principal "test1@CESHI.COM": ceshi123
change_password: Principal does not exist while changing password for "test@CESHI.COM".
kadmin.local: exit

Generate a new multi-user keytab file

Create a keytab file (generate to the current folder)

Case: integrate the keytab of hive and hdfs into the same keytab file

1. View all princs

# kadmin.local
kadmin.local: listprincs
hbase/hdp143@CESHI.COM
hdfs-vrvtest@CESHI.COM
hive/hdp140@CESHI.COM

2. Add the keys of the hdfs principal to hdfs-hive.keytab

# kadmin.local
kadmin.local: xst -norandkey -k hdfs-hive.keytab hdfs-vrvtest@CESHI.COM

3. Add the keys of the hive principal to hdfs-hive.keytab

# kadmin.local
kadmin.local: xst -norandkey -k hdfs-hive.keytab hive/hdp140@CESHI.COM

View the generated hdfs-hive.keytab
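`klist -k hdfs-hive.keytab` should now list both principals. If a script needs to check this, the principal column can be extracted from the klist output; a sketch using a captured sample that mirrors the klist format shown elsewhere in this article (on a real cluster you would pipe `klist -k hdfs-hive.keytab` instead of the sample variable):

```shell
# Sketch: extract the unique principal names from `klist -k` output.
# sample_output stands in for: klist -k hdfs-hive.keytab
sample_output='Keytab name: FILE:hdfs-hive.keytab
KVNO Principal
---- ---------------------------------
   1 hdfs-vrvtest@CESHI.COM
   1 hive/hdp140@CESHI.COM
   1 hive/hdp140@CESHI.COM'
# Skip the three header lines, take the second column, de-duplicate.
echo "$sample_output" | awk 'NR>3 {print $2}' | sort -u
```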

Log in using the generated Keytab file

# kinit -k -t hdfs-hive.keytab hive/hdp140@CESHI.COM

Modify the lease term

1. Modify global lease

# vi /etc/krb5.conf
[libdefaults]
 default_realm = CESHI.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h    # ticket lease time
 renew_lifetime = 7d      # renewal window
 forwardable = true
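For reference, the two lifetimes above expressed in seconds — simple shell arithmetic, just to make the relationship concrete (24h ticket lease, 7d renewable window):

```shell
# 24h ticket lifetime and 7d renew window, converted to seconds.
ticket_lifetime=$((24 * 3600))
renew_lifetime=$((7 * 24 * 3600))
echo "ticket_lifetime=${ticket_lifetime}s renew_lifetime=${renew_lifetime}s"
# -> ticket_lifetime=86400s renew_lifetime=604800s
```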

# Restart:
# service krb5kdc restart
# service kadmin restart

2. Manually modify a user's lease time

# View the lease period with the getprinc command at the kadmin prompt
# (check the default maximum duration; otherwise it is limited to 24 hours and cannot be renewed)
# kadmin.local
kadmin.local: getprinc hive/hdp141
Principal: hive/hdp141@CESHI.COM
Expiration date: [never]
Last password change: Mon Dec 18 05:56:57 EST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00        # lease time
Maximum renewable life: 0 days 00:00:00    # renewal time
Last modified: Mon Dec 18 05:56:57 EST 2017 (admin/admin@CESHI.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 8
Key: vno 1, aes256-cts-hmac-sha1-96
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5

# Commands to change the lease time (replace "user" with the real user):
modprinc -maxrenewlife 300days user
modprinc -maxlife 300days user

# Application examples
modprinc -maxrenewlife 300days hive/hdp141@CESHI.COM
modprinc -maxlife 300days hive/hdp141@CESHI.COM

Restart after exiting

# service krb5kdc restart
# service kadmin restart

3. Use spark tasks to test job submissions under kerberos

1. Specify the spark user and keytab

# cd /etc/security/keytabs
[root@hdp140 keytabs]# ll
-r--r----- 1 root      root   353 Oct 30 23:54 ambari.server.keytab
-r--r----- 1 hbase     hadoop 313 Oct 30 23:54 hbase.headless.keytab
-r-------- 1 hbase     hadoop 313 Oct 30 23:54 hbase.service.keytab
-r-------- 1 hdfs      hadoop 308 Oct 30 23:54 hdfs.headless.keytab
-r--r----- 1 hive      hadoop 308 Oct 30 23:54 hive.service.keytab
-r-------- 1 hdfs      hadoop 298 Oct 30 23:54 nn.service.keytab
-r--r----- 1 ambari-qa hadoop 333 Oct 30 23:54 smokeuser.headless.keytab
-r-------- 1 spark     hadoop 313 Oct 30 23:54 spark.headless.keytab
-r--r----- 1 root      hadoop 308 Oct 30 23:54 spnego.service.keytab

# klist -k spark.headless.keytab
Keytab name: FILE:spark.headless.keytab
KVNO Principal
---- ------------------------------
   1 spark-test@CESHI.COM
   1 spark-test@CESHI.COM
   1 spark-test@CESHI.COM

# kinit -k spark-test@CESHI.COM -t spark.headless.keytab    # authenticate as the spark user
[root@hdp140 keytabs]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: spark-test@CESHI.COM

Valid starting       Expires              Service principal
10/31/2017 01:08:56  11/01/2017 01:08:56  krbtgt/CESHI.COM@CESHI.COM

Upload the Spark test files from /opt to HDFS

# hdfs dfs -mkdir -p /tmp/sparkwordcount/
# hdfs dfs -mkdir -p /tmp/sparkwordcount/input
# hdfs dfs -put /opt/sparkwordcountinput.txt /tmp/sparkwordcount/input
# hdfs dfs -put /opt/spark_word_count.jar /tmp/sparkwordcount/

# Spark test files: sparkwordcountinput.txt, spark_word_count.jar

Spark command to submit a task

# spark-submit \
    --class com.vrv.bigdata.ml.DataExtract2 \
    --master yarn \
    --deploy-mode cluster \
    --principal spark-test@CESHI.COM \
    --keytab /etc/security/keytabs/spark.headless.keytab \
    hdfs://hdp140:8020/tmp/sparkwordcount/spark_word_count.jar \
    hdfs://hdp140:8020/tmp/sparkwordcount/input \
    hdfs://hdp140:8020/tmp/sparkwordcount/output/spark_work_count
17/10/31 01:15:28 INFO Client:
     client token: Token { kind: YARN_CLIENT_TOKEN, service: }
     diagnostics: N/A
     ApplicationMaster host: 192.168.2.143
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1509383715631
     final status: SUCCEEDED
     tracking URL: http://hdp141:8088/proxy/application_1509379053332_0014/
     user: spark
17/10/31 01:15:28 INFO ShutdownHookManager: Shutdown hook called
17/10/31 01:15:28 INFO ShutdownHookManager: Deleting directory /tmp/spark-40e868df-ca58-4389-b20c-03d2717516cc

Tricky problem 1:

Received Exception while testing connectivity to the KDC:
Algorithm AES256 not enabled
Host: hdp261:88 (TCP)
java.lang.IllegalArgumentException: Algorithm AES256 not enabled
    at sun.security.krb5.EncryptionKey.<init>(EncryptionKey.java:286)
    at javax.security.auth.kerberos.KeyImpl.<init>(KeyImpl.java)

Solution:

On the host where the Ambari server resides and on all cluster hosts, install the JCE policy files appropriate to the JDK version in use:
Oracle JDK 1.7: http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
Oracle JDK 1.8: http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

Tricky problem 2:

Org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)

Solution:

# kinit guest
Password for guest@CESHI.COM: ceshi123456.
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: guest@CESHI.COM

Valid starting       Expires              Service principal
11/28/2017 18:30:48  11/29/2017 18:30:48  krbtgt/CESHI.COM@CESHI.COM
11/28/2017 18:31:09  11/29/2017 18:30:48  HTTP/hdp140@
11/28/2017 18:31:09  11/29/2017 18:30:48  HTTP/hdp140@CESHI.COM

Reference:

http://blog.csdn.net/wulantian/article/details/42418231
http://book.51cto.com/art/200907/140533.htm

This is the end of the hands-on section. An Elasticsearch security practice article will follow later.

Summary:

1. Hadoop clusters have many nodes, so configuring and maintaining a high-performance, stable Hadoop cluster with Kerberos is very difficult.

2. HDFS is a file system, and user authentication and authorization on it are complex — no less difficult than user and group management on a Linux system. With Kerberos added, managing users and groups becomes even more complex, and it is common for a legitimate user to be unable to access files on HDFS.

3. After integrating Hadoop with Kerberos, existing users and files may become invalid, potentially resulting in data loss.
