2025-03-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Adding ordinary users to Hive with select permission only, and no create, drop, or other permissions.
We just received a customer request to add 4 ordinary Hive users that have select permission but no permission to create or drop databases/tables; only the amos user keeps the full select, create, drop, and revoke rights.
After a lot of searching, the final solution is:
1. First, log in to hive as the amos user and grant amos all permissions on the database dmp.
[amos@DMP-GATEWAY amos]$ cd /opt/amos/hive/bin/
[amos@DMP-GATEWAY bin]$ ./hive
hive> grant all on database dmp to user amos;
2. Add the ordinary Linux user mcduser1
Add the mcduser1 user on the CentOS 6.7 system: useradd mcduser1
3. Modify the directory permissions on HDFS
hadoop fs -chmod -R 777 /user/hive/warehouse
hadoop fs -chmod -R 777 /tmp
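Opening these directories up to 777 is a blunt instrument; the new mode can be verified (and later tightened) with the standard hadoop fs shell:

```shell
# Confirm the new mode on both directories; each should now show drwxrwxrwx
hadoop fs -ls -d /user/hive/warehouse
hadoop fs -ls -d /tmp
```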
4. Modify the Hive configuration file hive-site.xml to add permission control, then restart the Hive services: metastore, HiveServer2, and hwi.
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
  <description>enable or disable the hive client authorization</description>
</property>
<property>
  <name>hive.security.authorization.createtable.owner.grants</name>
  <value>ALL</value>
  <description>the privileges automatically granted to the owner whenever a table gets created. An example like "select,drop" will grant select and drop privilege to the owner of the table</description>
</property>
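The restart commands are not shown in the original steps; a minimal sketch, assuming the services are launched straight from the install tree under /opt/amos/hive (the nohup style and log paths are assumptions, adjust to your own init scripts):

```shell
# Stop the running metastore/HiveServer2/hwi processes first (e.g. via their PIDs),
# then relaunch each service so the new hive-site.xml is picked up.
cd /opt/amos/hive/bin
nohup ./hive --service metastore   > /tmp/metastore.log   2>&1 &
nohup ./hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &
nohup ./hive --service hwi         > /tmp/hwi.log         2>&1 &
```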
5. Log in to hive with superuser amos, and give select permission to ordinary user mcduser1.
[amos@DMP-GATEWAY amos]$ cd /opt/amos/hive/bin/
[amos@DMP-GATEWAY bin]$ ./hive
hive> grant select on database dmp to user mcduser1;
Note: if a permission is granted by mistake, it can be removed with revoke:
hive> revoke select on database dmp from user amos;
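To check what a user actually holds after granting or revoking, Hive's SHOW GRANT statement can be run from the shell (the hive binary path is the one used in the earlier steps):

```shell
# List the privileges mcduser1 currently holds on database dmp
/opt/amos/hive/bin/hive -e "show grant user mcduser1 on database dmp;"
```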
6. Testing showed that when mcduser1 runs select count(*), the MapReduce job it launches fails. The error in the YARN log is:
Diagnostics: Application application_1484125831039_0001 failed 3 times due to AM Container for appattempt_1484125831039_0001_000003 exited with exitCode: -1000
For more detailed output, check application tracking page: http://DMP-DEV01:8088/cluster/app/application_1484125831039_0001 Then, click on links to logs of each attempt.
Diagnostics: Application application_1484125831039_0001 initialization failed (exitCode=255) with output: User mcduser1 not found
Failing this attempt. Failing the application.
It turns out that the mcduser1 user exists on the gateway server but not on the NodeManager nodes. Use ansible to add the user on all node servers. Note that useradd -s /sbin/nologin mcduser1 is used, so mcduser1 is not allowed to log in on the NodeManager nodes.
[root@mcddmpfe01]# ansible amosDnNodes -m shell -a 'useradd -s /sbin/nologin mcduser1'
/opt/amos/python2.7/lib/python2.7/site-packages/pycrypto-2.6.1-py2.7-linux-x86_64.egg/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
mcddmpnode05 | SUCCESS | rc=0 >>
mcddmpnode01 | SUCCESS | rc=0 >>
mcddmpnode03 | SUCCESS | rc=0 >>
mcddmpnode02 | SUCCESS | rc=0 >>
mcddmpnode04 | SUCCESS | rc=0 >>
mcddmpnode07 | SUCCESS | rc=0 >>
mcddmpnode06 | SUCCESS | rc=0 >>
mcddmpnode08 | SUCCESS | rc=0 >>
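Before re-running the query, it is worth confirming the account now exists on every node, using the same inventory group:

```shell
# Every node should print mcduser1's uid/gid; a "no such user" means the useradd failed there
ansible amosDnNodes -m shell -a 'id mcduser1'
```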
7. The test passed: an ordinary user can now start a MapReduce job through select count(*) from table in hive.
hive> select count(*) from store_master;
Query ID = hiveuser1_20170112122713_fea2188b-7e19-4a9a-896d-ec472c60d0ca
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1484051373423_1338, Tracking URL = http://mcddmpfe02:8088/proxy/application_1484051373423_1338/
Kill Command = /opt/amos/hadoop/bin/hadoop job -kill job_1484051373423_1338
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-01-12 12:27:36,289 Stage-1 map = 0%, reduce = 0%
2017-01-12 12:27:48,349 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.01 sec
2017-01-12 12:27:59,130 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 7.86 sec
MapReduce Total cumulative CPU time: 7 seconds 860 msec
Ended Job = job_1484051373423_1338
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 7.86 sec  HDFS Read: 0  HDFS Write: 0  SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 860 msec
OK
2302
Time taken: 47.764 seconds, Fetched: 1 row(s)
hive> create database test;
Authorization failed:No privilege 'Create' found for outputs {}. Use SHOW GRANT to get more details.
8. As for users' Hive operation logs, they are currently recorded in the .hivehistory file in each user's home directory, i.e. /home/$USER/.hivehistory.
For example, mcduser1's actions on the hive command line are logged in /home/mcduser1/.hivehistory.
9. One problem remains: ordinary users can still grant permissions themselves. No suitable solution has been found for this yet; it may require developing and writing a custom hook program.
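One direction for such a hook: Hive can run a semantic-analyzer hook before every statement via the hive.semantic.analyzer.hook property, and that hook can reject grant/revoke statements from non-admin users. A configuration sketch only, where com.example.DenyGrantHook is a hypothetical class (it would extend Hive's AbstractSemanticAnalyzerHook and throw an error when anyone other than amos issues GRANT or REVOKE) that you would have to write and place on Hive's classpath:

```xml
<property>
  <name>hive.semantic.analyzer.hook</name>
  <!-- Hypothetical class: extends AbstractSemanticAnalyzerHook and rejects
       GRANT/REVOKE statements issued by anyone other than amos -->
  <value>com.example.DenyGrantHook</value>
</property>
```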