1. Purpose of this document
This document describes how to use Sentry to manage permissions on Hive external tables. It is based on the following assumptions:
1. Operating system version: RedHat 6.5
2. CM version: CM 5.11.1
3. Kerberos and Sentry are enabled in the cluster
4. Operations are performed as the ec2-user user, which has sudo privileges
2. Pre-preparation
2.1 Create the external table data parent directory
1. Log in to Kerberos as the hive user
[root@ip-172-31-8-141 1874-hive-HIVESERVER2]# kinit -kt hive.keytab hive/ip-172-31-8-141.ap-southeast-1.compute.internal@CLOUDERA.COM
[root@ip-172-31-8-141 1874-hive-HIVESERVER2]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hive/ip-172-31-8-141.ap-southeast-1.compute.internal@CLOUDERA.COM
Valid starting     Expires            Service principal
09/01/17 11:10:54  09/02/17 11:10:54  krbtgt/CLOUDERA.COM@CLOUDERA.COM
        renew until 09/06/17 11:10:54
[root@ip-172-31-8-141 1874-hive-HIVESERVER2]#
2. Create an HDFS directory
Use the following commands to create the Hive external table data directory /extwarehouse under the HDFS root directory:
[root@ip-172-31-8-141 ec2-user]# hadoop fs -mkdir /extwarehouse
[root@ip-172-31-8-141 ec2-user]# hadoop fs -ls /
drwxr-xr-x   - hive   supergroup  0 2017-09-01 11:27 /extwarehouse
drwxrwxrwx   - user_r supergroup  0 2017-08-23 03:23 /fayson
drwx------   - hbase  hbase       0 2017-09-01 02:59 /hbase
drwxrwxrwt   - hdfs   supergroup  0 2017-08-31 06:18 /tmp
drwxrwxrwx   - hdfs   supergroup  0 2017-08-30 03:48 /user
[root@ip-172-31-8-141 ec2-user]# hadoop fs -chown hive:hive /extwarehouse
[root@ip-172-31-8-141 ec2-user]# hadoop fs -chmod 771 /extwarehouse
[root@ip-172-31-8-141 ec2-user]# hadoop fs -ls /
drwxrwx--x   - hive   hive        0 2017-09-01 11:27 /extwarehouse
drwxrwxrwx   - user_r supergroup  0 2017-08-23 03:23 /fayson
drwx------   - hbase  hbase       0 2017-09-01 02:59 /hbase
drwxrwxrwt   - hdfs   supergroup  0 2017-08-31 06:18 /tmp
drwxrwxrwx   - hdfs   supergroup  0 2017-08-30 03:48 /user
[root@ip-172-31-8-141 ec2-user]#
2.2 Configure ACL synchronization for the external table data parent directory
1. Make sure that the HDFS service has Sentry synchronization enabled and ACLs enabled
2. Configure the Sentry synchronization path prefixes to include the Hive external table data directory created in 2.1
3. After the configuration is complete, restart the service. A quick command-line sanity check is sketched below.
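Before moving on, you can sanity-check the prerequisites from a shell. This is a minimal sketch: the getconf call reads the client-side configuration deployed to the node (if the key is not present there, it may report the default rather than the NameNode's live setting), and getfacl simply confirms the parent directory's ownership and base permissions:
[root@ip-172-31-8-141 ec2-user]# hdfs getconf -confKey dfs.namenode.acls.enabled   # should print "true" if HDFS ACLs are on
[root@ip-172-31-8-141 ec2-user]# hadoop fs -getfacl /extwarehouse                  # shows base permissions (and, later, any synchronized ACL entries)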
3. Create a Hive external table
1. Use the beeline command line to connect to Hive and create the external table
Table creation statement:
create external table if not exists student(
  name string,
  age int,
  addr string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/extwarehouse/student';
Terminal operation:
[root@ip-172-31-8-141 1874-hive-HIVESERVER2]# beeline
Beeline version 1.1.0-cdh5.11.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/ip-172-31-8-141.ap-southeast-1.compute.internal@CLOUDERA.COM
...
0: jdbc:hive2://localhost:10000/> create external table if not exists student(
. . . . . . . . . . . . . . . .> name string,
. . . . . . . . . . . . . . . .> age int,
. . . . . . . . . . . . . . . .> addr string
. . . . . . . . . . . . . . . .> )
. . . . . . . . . . . . . . . .> ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
. . . . . . . . . . . . . . . .> LOCATION '/extwarehouse/student';
...
INFO  : OK
No rows affected (0.236 seconds)
0: jdbc:hive2://localhost:10000/>
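If you prefer to script this step instead of typing it at the interactive prompt, the same statement can be passed to beeline with -u and -e; a sketch using the connection string from above:
beeline -u "jdbc:hive2://localhost:10000/;principal=hive/ip-172-31-8-141.ap-southeast-1.compute.internal@CLOUDERA.COM" \
  -e "create external table if not exists student(name string, age int, addr string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION '/extwarehouse/student';"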
2. Load data into the student table
Prepare the test data:
[root@ip-172-31-8-141 student]# pwd
/home/ec2-user/student
[root@ip-172-31-8-141 student]# ll
total 4
-rw-r--r-- 1 root root 39 Sep  1 11:37 student.txt
[root@ip-172-31-8-141 student]# cat student.txt
zhangsan,18,guangzhou
lisi,20,shenzhen
[root@ip-172-31-8-141 student]#
Put the student.txt file into the /tmp/student directory of HDFS:
[root@ip-172-31-8-141 student]# hadoop fs -mkdir /tmp/student
[root@ip-172-31-8-141 student]# ll
total 4
-rw-r--r-- 1 hive hive 39 Sep  1 11:37 student.txt
[root@ip-172-31-8-141 student]# hadoop fs -put student.txt /tmp/student
[root@ip-172-31-8-141 student]# hadoop fs -ls /tmp/student
Found 1 items
-rw-r--r--   3 hive supergroup  39 2017-09-01 11:57 /tmp/student/student.txt
[root@ip-172-31-8-141 student]#
From the beeline command line, load the data into the student table:
0: jdbc:hive2://localhost:10000/> load data inpath '/tmp/student' into table student;
...
INFO  : Table default.student stats: [numFiles=1, totalSize=39]
INFO  : Completed executing command (queryId=hive_20170901115858_5a76aa76-1b24-40ce-8254-42991856c05b); Time taken: 0.263 seconds
INFO  : OK
No rows affected (0.41 seconds)
0: jdbc:hive2://localhost:10000/>
After executing the load command, view the table data
0: jdbc:hive2://localhost:10000/> select * from student;
...
INFO  : OK
+---------------+--------------+---------------+
| student.name  | student.age  | student.addr  |
+---------------+--------------+---------------+
| zhangsan      | 18           | guangzhou     |
| lisi          | 20           | shenzhen      |
+---------------+--------------+---------------+
2 rows selected (0.288 seconds)
0: jdbc:hive2://localhost:10000/>
4. Query as the fayson user in beeline and impala-shell
Initialize a Kerberos ticket with the fayson user's principal:
[ec2-user@ip-172-31-8-141 cdh-shell-master]$ kinit fayson
Password for fayson@CLOUDERA.COM:
[ec2-user@ip-172-31-8-141 cdh-shell-master]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: fayson@CLOUDERA.COM
Valid starting     Expires            Service principal
09/01/17 12:27:39  09/02/17 12:27:39  krbtgt/CLOUDERA.COM@CLOUDERA.COM
        renew until 09/08/17 12:27:39
[ec2-user@ip-172-31-8-141 cdh-shell-master]$
4.1 Access the HDFS directory
[ec2-user@ip-172-31-8-141 ~]$ hadoop fs -ls /extwarehouse/student
ls: Permission denied: user=fayson, access=READ_EXECUTE, inode="/extwarehouse/student":hive:hive:drwxrwx--x
[ec2-user@ip-172-31-8-141 ~]$
4.2 Query from the beeline command line
[ec2-user@ip-172-31-8-141 ~]$ beeline
Beeline version 1.1.0-cdh5.11.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/ip-172-31-8-141.ap-southeast-1.compute.internal@CLOUDERA.COM
...
0: jdbc:hive2://localhost:10000/> show tables;
INFO  : OK
+-----------+
| tab_name  |
+-----------+
+-----------+
No rows selected (0.295 seconds)
0: jdbc:hive2://localhost:10000/> select * from student;
Error: Error while compiling statement: FAILED: SemanticException No valid privileges
 User fayson does not have privileges for QUERY
 The required privileges: Server=server1->Db=default->Table=student->Column=addr->action=select; (state=42000,code=40000)
0: jdbc:hive2://localhost:10000/>
4.3 Query from the impala-shell command line
[ec2-user@ip-172-31-8-141 cdh-shell-master]$ impala-shell
...
[Not connected] > connect ip-172-31-10-156.ap-southeast-1.compute.internal:21000;
Connected to ip-172-31-10-156.ap-southeast-1.compute.internal:21000
Server version: impalad version 2.8.0-cdh5.11.1 RELEASE (build 3382c1c488dff12d5ca8d049d2b59babee605b4e)
[ip-172-31-10-156.ap-southeast-1.compute.internal:21000] > show tables;
Query: show tables
ERROR: AuthorizationException: User 'fayson@CLOUDERA.COM' does not have privileges to access: default.*
[ip-172-31-10-156.ap-southeast-1.compute.internal:21000] > select * from student;
Query: select * from student
Query submitted at: 2017-09-01 12:33:06 (Coordinator: http://ip-172-31-10-156.ap-southeast-1.compute.internal:25000)
ERROR: AuthorizationException: User 'fayson@CLOUDERA.COM' does not have privileges to execute 'SELECT' on: default.student
[ip-172-31-10-156.ap-southeast-1.compute.internal:21000] >
4.4 Test Summary
For the external table created by the hive user, before read permission on the student table is granted, the fayson user has no permission to access the table's HDFS data directory (/extwarehouse/student) and cannot query the student table data from either the beeline or the impala-shell command line.
5. Grant the fayson user read access to the student table
Note: the following operations are performed as the Hive administrator user.
1. Create a student_read role
0: jdbc:hive2://localhost:10000/> create role student_read;
...
INFO  : Executing command (queryId=hive_20170901124848_927878ba-0217-4a32-a508-bf29fed67be8): create role student_read
...
INFO  : OK
No rows affected (0.104 seconds)
0: jdbc:hive2://localhost:10000/>
2. Grant select permission on the student table to the student_read role
0: jdbc:hive2://localhost:10000/> grant select on table student to role student_read;
...
INFO  : Executing command (queryId=hive_20170901125252_8702d99d-d8eb-424e-929d-5df352828e2c): grant select on table student to role student_read
...
INFO  : OK
No rows affected (0.111 seconds)
0: jdbc:hive2://localhost:10000/>
3. Grant the student_read role to the fayson user group
0: jdbc:hive2://localhost:10000/> grant role student_read to group fayson;
...
INFO  : Executing command (queryId=hive_20170901125454_5f27a87e-2f63-46d9-9cce-6f346a0c415c): grant role student_read to group fayson
...
INFO  : OK
No rows affected (0.122 seconds)
0: jdbc:hive2://localhost:10000/>
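To confirm that the grants took effect, the standard Sentry show statements can be run from the same admin beeline session; a quick sketch:
0: jdbc:hive2://localhost:10000/> show role grant group fayson;   -- roles granted to the fayson group
0: jdbc:hive2://localhost:10000/> show grant role student_read;   -- privileges held by the student_read role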
6. Test again
Log in to Kerberos using the fayson user
6.1 Access the HDFS directory
Access the HDFS directory /extwarehouse/student where the student table data is located:
[ec2-user@ip-172-31-8-141 ~]$ hadoop fs -ls /extwarehouse/student
Found 1 items
-rwxrwx--x+  3 hive hive  39 2017-09-01 14:42 /extwarehouse/student/student.txt
[ec2-user@ip-172-31-8-141 ~]$
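The trailing "+" in the permission string indicates extended ACL entries, which is what Sentry's HDFS synchronization adds. You can inspect them with getfacl; the output below is illustrative only, and the exact group entries depend on the grants in your cluster:
[ec2-user@ip-172-31-8-141 ~]$ hadoop fs -getfacl /extwarehouse/student
# file: /extwarehouse/student
# owner: hive
# group: hive
user::rwx
group::rwx
group:fayson:r-x
mask::rwx
other::--x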
6.2 Query the student table in beeline
[ec2-user@ip-172-31-8-141 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: fayson@CLOUDERA.COM
Valid starting     Expires            Service principal
09/01/17 12:58:59  09/02/17 12:58:59  krbtgt/CLOUDERA.COM@CLOUDERA.COM
        renew until 09/08/17 12:58:59
[ec2-user@ip-172-31-8-141 ~]$ beeline
Beeline version 1.1.0-cdh5.11.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/ip-172-31-8-141.ap-southeast-1.compute.internal@CLOUDERA.COM
...
0: jdbc:hive2://localhost:10000/> show tables;
INFO  : OK
+-----------+
| tab_name  |
+-----------+
| student   |
+-----------+
1 row selected (0.294 seconds)
0: jdbc:hive2://localhost:10000/> select * from student;
...
INFO  : OK
+---------------+--------------+---------------+
| student.name  | student.age  | student.addr  |
+---------------+--------------+---------------+
| zhangsan      | 18           | guangzhou     |
| lisi          | 20           | shenzhen      |
+---------------+--------------+---------------+
2 rows selected (0.241 seconds)
0: jdbc:hive2://localhost:10000/>
6.3 Query the student table in impala-shell
[ec2-user@ip-172-31-8-141 cdh-shell-master]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: fayson@CLOUDERA.COM
Valid starting     Expires            Service principal
09/01/17 12:58:59  09/02/17 12:58:59  krbtgt/CLOUDERA.COM@CLOUDERA.COM
        renew until 09/08/17 12:58:59
[ec2-user@ip-172-31-8-141 cdh-shell-master]$ impala-shell
...
[Not connected] > connect ip-172-31-10-156.ap-southeast-1.compute.internal:21000;
Connected to ip-172-31-10-156.ap-southeast-1.compute.internal:21000
Server version: impalad version 2.8.0-cdh5.11.1 RELEASE (build 3382c1c488dff12d5ca8d049d2b59babee605b4e)
[ip-172-31-10-156.ap-southeast-1.compute.internal:21000] > show tables;
Query: show tables
+---------+
| name    |
+---------+
| student |
+---------+
Fetched 1 row(s) in 0.02s
[ip-172-31-10-156.ap-southeast-1.compute.internal:21000] > select * from student;
...
+----------+-----+-----------+
| name     | age | addr      |
+----------+-----+-----------+
| zhangsan | 18  | guangzhou |
| lisi     | 20  | shenzhen  |
+----------+-----+-----------+
Fetched 2 row(s) in 0.13s
[ip-172-31-10-156.ap-southeast-1.compute.internal:21000] >
6.4 Test Summary
For the external table created by the hive user, after read permission on the student table is granted, the fayson user can access the table's HDFS data directory (/extwarehouse/student) normally and can query the student table data from both the beeline and the impala-shell command lines.
7. Summary of Sentry permission management for Hive external tables
When ACL synchronization is enabled for the parent directory of an external table's data, there is no need to separately maintain HDFS permissions on the external table's data directory: Sentry grants are reflected in HDFS automatically.
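For example, when access should be withdrawn, revoking the role from the group is sufficient; no hadoop fs -chmod or -chown maintenance on /extwarehouse/student is needed, because the synchronized ACL entries disappear along with the grant. A sketch, run as the Hive administrator:
0: jdbc:hive2://localhost:10000/> revoke role student_read from group fayson;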
Reference documentation:
https://www.cloudera.com/documentation/enterprise/latest/topics/sg_hdfs_sentry_sync.html