Hadoop Cluster (5) Hive installation

2025-04-07 Update From: SLTechnology News&Howtos shulou


Shulou(Shulou.com)06/03 Report--

As a DBA who has worked with the hadoop family for many years, the product I am most at home with is hive. After all, SQL is familiar territory, and you no longer have to endure the pain of writing MapReduce jobs.

First of all, let's give a brief introduction to Hive.

Hive is a Hadoop-based data warehouse solution. Because Hadoop itself has good scalability and high fault tolerance in data storage and computing, data warehouses built with Hive also inherit these characteristics.

To put it simply, Hive is a SQL layer on top of Hadoop: it translates SQL into MapReduce jobs and executes them on the cluster. This makes it very convenient for data developers and analysts to use SQL for statistics and analysis over massive data, without having to develop MapReduce in a programming language.

Let's start the installation of Hive. The prerequisite is that HDFS and YARN have already been installed and started. For HDFS installation, please refer to:

Hadoop Cluster (1) Zookeeper Construction

Hadoop Cluster (2) HDFS Construction

Hadoop Cluster (3) Hbase Construction

For the Hive software download, the version I used is hive-1.2.1, which can no longer be downloaded; fetch a newer version as needed from:

http://hive.apache.org/downloads.html

tar -xzvf apache-hive-1.2.1-bin.tar.gz

Modify the database-related configuration in hive-site.xml. The main items are listed below. In actual production there are many other parameters to configure, such as lzo compression, kerberos, and so on; these are only the most basic parameters needed to get hive running.

javax.jdo.option.ConnectionURL: set the value to the address of the MySQL instance, for example jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true
javax.jdo.option.ConnectionDriverName: set the value to the MySQL driver class; for example, mine is com.mysql.jdbc.Driver
javax.jdo.option.ConnectionUserName: set the value to the login name of the MySQL database, e.g. root
javax.jdo.option.ConnectionPassword: set the value to the login password of the MySQL database
hive.metastore.schema.verification: set the value to false
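Taken together, the five properties above form a minimal metastore section of hive-site.xml. The sketch below writes such a file; the MySQL host, database name, user, and password here are placeholders to adapt, not values from this cluster:

```shell
#!/bin/sh
# Write a minimal hive-site.xml containing only the metastore DB settings.
# All values below are placeholders; substitute your own MySQL host,
# database, user, and password, and place the file in Hive's conf/ dir.
cat > hive-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>changeme</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
</configuration>
EOF
```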

Create the corresponding local directories

mkdir -p /data1/hiveLogs-security/; chown -R hive:hadoop /data1/hiveLogs-security/
mkdir -p /data1/hiveData-security/; chown -R hive:hadoop /data1/hiveData-security/
mkdir -p /tmp/hive-security/operation_logs; chown -R hive:hadoop /tmp/hive-security/operation_logs

Create the HDFS directories

hadoop fs -mkdir /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /tmp
hadoop fs -chmod g+w /user/hive/warehouse

Initialize hive

[hive@aznbhivel01 ~]$ schematool -initSchema -dbType mysql
Metastore connection URL: jdbc:mysql://172.16.13.88:3306/hive_beta?useUnicode=true&characterEncoding=UTF-8&createDatabaseIfNotExist=true
Metastore Connection Driver: com.mysql.jdbc.Driver
Metastore connection User: envision
Starting metastore schema initialization to 1.2.0
Initialization script hive-schema-1.2.0.mysql.sql
Initialization script completed
schemaTool completed

Starting hive for the first time, I ran into a string of errors. Many of them are configuration problems that a "hive rookie" simply is not aware of, along with startup issues. For veteran drivers, none of this is a problem.

Reason: the Metastore Server service process for Hive had not started normally.

[hive@aznbhivel01 ~]$ hive

Logging initialized using configuration in file:/usr/local/hadoop/apache-hive-1.2.1/conf/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.start (SessionState.java:528)
    at org.apache.hadoop.hive.cli.CliDriver.run (CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main (CliDriver.java:621)
    ...
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance (NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance (DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance (Constructor.java:526)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance (MetaStoreUtils.java:1521)
    ... 14 more
Caused by: MetaException (message:Could not connect to meta store using any of the URIs provided. Most recent failure:

-7. Solution: start Hive's Metastore Server service process with the following command, which ran into the next problem:

[hive@aznbhivel01 ~]$ hive --service metastore &
Starting Hive Metastore Server
org.apache.thrift.transport.TTransportException: java.io.IOException: Login failure for hive/aznbhivel01.liang.com@ENVISIONCN.COM from keytab /etc/security/keytab/hive.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user

    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server. (HadoopThriftAuthBridge.java:358)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge.createServer (HadoopThriftAuthBridge.java:102)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore (HiveMetaStore.java:5990)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main (HiveMetaStore.java:5909)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:606)
    at org.apache.hadoop.util.RunJar.run (RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main (RunJar.java:136)
Caused by: java.io.IOException: Login failure for hive/aznbhivel01.liang.com@ENVISIONCN.COM from keytab /etc/security/keytab/hive.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user

-8. The keytab could not be read: fix the permissions on the hive.keytab file.

ll /etc/security/keytab/
total 100
-r-------- 1 hbase  hadoop 18002 Dec  5 17:06 hbase.keytab
-r-------- 1 hdfs   hadoop 18002 Dec  5 17:04 hdfs.keytab
-r-------- 1 hive   hadoop 18002 Dec  5 17:06 hive.keytab
-r-------- 1 mapred hadoop 18002 Dec  5 17:06 mapred.keytab
-r-------- 1 yarn   hadoop 18002 Dec  5 17:06 yarn.keytab
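The article does not show the actual fix. A plausible sketch, assuming the usual keytab convention of owner-only read access (the chown/chmod choice here is my assumption, demonstrated on a stand-in file rather than the real keytab):

```shell
#!/bin/sh
# Real fix (run as root) would look like:
#   chown hive:hadoop /etc/security/keytab/hive.keytab
#   chmod 400 /etc/security/keytab/hive.keytab
# Demonstrated here on a stand-in file so it can run unprivileged:
touch hive.keytab.demo
chmod 400 hive.keytab.demo
# First 10 chars of the mode string should now be -r--------
ls -l hive.keytab.demo | cut -c1-10
```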

-9. Restart metastore again

$ hive --service metastore &
[2] 41285
[1]   Killed    hive --service metastore  (wd: ~)
(wd now: /etc/security)
[hive@aznbhivel01 security]$ Starting Hive Metastore Server

-10. Then start hiveserver2:

hive --service hiveserver2 &

-11. The startup still failed, and I was confused. The obvious symptom is that the server cannot be found in kerberos's KDC. But kinit had already been done, successfully; and as the log shows, the keytab authentication itself succeeded.

Attempts to regenerate the keytab were also fruitless. Finally: could the cause be that an IP address was written in hive-site.xml? Changing it to the hostname, "thrift://aznbhivel01.liang.com:9083", solved the problem.

Configuration to modify in hive-site.xml:

hive.metastore.uris
thrift://aznbhivel01.liang.com:9083
Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.

Log information (excerpt):

2017-12-07 16:xx DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:login(221)) - hadoop login
2017-12-07 16:xx DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:commit(192)) - Using user: "hive/aznbhivel01.liang.com@LIANG.COM" with name hive/aznbhivel01.liang.com@LIANG.COM
2017-12-07 16:xx INFO  [main]: security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(965)) - Login successful for user hive/aznbhivel01.liang.com@LIANG.COM using keytab file /etc/security/keytab/hive.keytab
2017-12-07 16:xx INFO  [main]: hive.metastore (HiveMetaStoreClient.java:open(386)) - Trying to connect to metastore with URI thrift://172.16.13.88:9083
2017-12-07 16:xx DEBUG [main]: transport.TSaslTransport (TSaslTransport.java:open(261)) - opening transport org.apache.thrift.transport.TSaslClientTransport@5bb4d6c0
2017-12-07 16:xx ERROR [main]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge (GssKrb5Client.java:212)
    at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage (TSaslClientTransport.java:94)

-12. Then I encountered a permission error, yet nothing seemed wrong with the permissions: hdfs could already see the files written by hive, so those should be correct. Continue the analysis.

2017-12-07 20:30:59,168 DEBUG [main]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(250)) - Call: getFileInfo took 2ms
2017-12-07 20:30:59,169 INFO  [main]: server.HiveServer2 (HiveServer2.java:stop(305)) - Shutting down HiveServer2
2017-12-07 20:30:59,169 INFO  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(368)) - Exception caught when calling stop of HiveServer2 before retrying start
java.lang.NullPointerException
    at org.apache.hive.service.server.HiveServer2.stop (HiveServer2.java:309)
    at org.apache.hive.service.server.HiveServer2.startHiveServer2 (HiveServer2.java:366)
    ...
2017-12-07 20:30:59,170 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(376)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: java.io.IOException: Permission denied
    at org.apache.hive.service.cli.CLIService.init (CLIService.java:114)
    at org.apache.hive.service.CompositeService.init (CompositeService.java:59)
    at org.apache.hive.service.server.HiveServer2.init (HiveServer2.java:100)
    at org.apache.hive.service.server.HiveServer2.startHiveServer2 (HiveServer2.java:345)
    ...
Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied
    at org.apache.hadoop.hive.ql.session.SessionState.start (SessionState.java:521)
    ... 14 more
Caused by: java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively (Native Method)
    at java.io.File.createNewFile (File.java:1006)
    at java.io.File.createTempFile (File.java:1989)
    at org.apache.hadoop.hive.ql.session.SessionState.createTempFile (SessionState.java:824)
    at org.apache.hadoop.hive.ql.session.SessionState.start (SessionState.java:519)
    ... 14 more

Google turned up the strace method for finding out what the permission problem is:

Strace is your friend if you are on Linux. Try the following from the

Shell in which you are starting hive...

strace -f -e trace=file service hive-server2 start 2>&1 | grep ermission

You should see the file it can't read/write.

The above problem finally turned out to be incorrect permissions on the /tmp/hive-security path. After fixing them, this problem was past.

hive.exec.local.scratchdir
/tmp/hive-security

-13. Next question, continue:

2017-12-07 20:58:56,749 DEBUG [main]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(250)) - Call: getFileInfo took 2ms
2017-12-07 20:58:56,749 ERROR [main]: session.SessionState (SessionState.java:setupAuth(749)) - Error setting up authorization: java.lang.ClassNotFoundException: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager (HiveUtils.java:391)

Googling the keyword org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory turned up this article:

https://www.cnblogs.com/wyl9527/p/6835620.html

That article covers Ranger integration: after executing its startup command, restart the hive service. Once the installation is complete, you will see several more configuration files, and the hiveserver2-site.xml file is modified as follows:

hive.security.authorization.enabled
true
hive.security.authorization.manager
org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
hive.security.authenticator.manager
org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
hive.conf.restricted.list
hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager

Ranger security authorization is not currently used here, so I decided to disable it. How?

Simply delete the hiveserver2-site.xml file. That took things another step forward, and hiveserver2 started successfully. Entering hive, I encountered the next error.

-14. Hive now starts normally, and queries can be entered through the hive command. The commands execute OK, but the query results cannot be returned normally.

[hive@aznbhivel01 hive]$ hive
Logging initialized using configuration in file:/usr/local/hadoop/apache-hive-1.2.1/conf/hive-log4j.properties
hive> show databases;
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: Error in configuring object
Time taken: 0.867 seconds

Baidu turned up a possible solution:

http://blog.csdn.net/wodedipang_/article/details/72720257

But my configuration matches none of the cases mentioned in that article. I suspected the permissions of this folder, among other things:

hive.exec.local.scratchdir
/tmp/hive-security
Local scratch space for Hive jobs

Finally, the log hive.log showed the following error, indicating that a jar package is missing:

Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses (CompressionCodecFactory.java:139)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init> (CompressionCodecFactory.java:179)
    at org.apache.hadoop.mapred.TextInputFormat.configure (TextInputFormat.java:45)
    ... 26 more
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
    at org.apache.hadoop.conf.Configuration.getClassByName (Configuration.java:2101)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses (CompressionCodecFactory.java:132)
    ... 28 more

-15. hadoop's core-site.xml configures lzo.LzoCodec among the compression codecs, so the corresponding jar package must be present before MapReduce jobs can execute normally.

io.compression.codecs
org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,org.apache.hadoop.io.compress.SnappyCodec,com.hadoop.compression.lzo.LzopCodec
io.compression.codec.lzo.class
com.hadoop.compression.lzo.LzoCodec
hadoop.http.staticuser.user
hadoop
lzo.text.input.format.ignore.nonlzo
false

Copy the required package over from another working environment, and that solves it.

Note that the lzo jar package is needed not only on the hive server but on all yarn/MapReduce machines; otherwise anything that invokes mapreduce with lzo compression will have problems, not just hive-initiated tasks.

# pwd
/usr/local/hadoop/hadoop-release/share/hadoop/common
# ls | grep lzo
hadoop-lzo-0.4.21-SNAPSHOT.jar

At this point, the hive installation is complete.

Having climbed out of the pits, savor the output of a hive query:

hive> select count(*) from test.testxx;
Query ID = hive_20171224121853_d96ed531-7e09-438d-b383-bc2a715753fc
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
Starting Job = job_1513915190261_0008, Tracking URL = https://aznbrmnl02.liang.com:8089/proxy/application_1513915190261_0008/
Kill Command = /usr/local/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1513915190261_0008
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2017-12-24 12:19:13 Stage-1 map = 0%, reduce = 0%
2017-12-24 12:19:15 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.85 sec
2017-12-24 12:19:30,064 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.43 sec
MapReduce Total cumulative CPU time: 5 seconds 430 msec
Ended Job = job_1513915190261_0008
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 1 Cumulative CPU: 5.43 sec HDFS Read: 32145 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 430 msec
OK
2526
Time taken: 38.834 seconds, Fetched: 1 row(s)

Points to pay attention to

-1. MySQL's character set is latin1, which is adequate when installing hive, but it will not display properly later on, especially when Chinese text is stored. It is therefore recommended that after installing hive you change the character set to UTF8.

mysql> SHOW VARIABLES LIKE 'character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | latin1                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)

-2. Modify the character set

# vi /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
default-character-set = utf8
character_set_server = utf8
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

-3. After modification

mysql> SHOW VARIABLES LIKE 'character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)

How to connect to the hive

A. The hive command connects directly. If kerberos is in play, note that you must kinit first.

su - hive
hive

B. beeline connection

beeline -u "jdbc:hive2://hive-hostname:10000/default;principal=hive/_HOST@LIANG.COM"

If it is the hiveserver2 HA architecture, the connection method is as follows:

beeline -u "jdbc:hive2://zookeeper1-ip:2181,zookeeper2-ip:2181,zookeeper3-ip:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2_zk;principal=hive/_HOST@LIANG.COM"

Without security authentication such as kerberos, beeline must specify the login user when connecting to hive:

beeline -u "jdbc:hive2://127.0.0.1:10000/default;" -n hive
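For scripting, the JDBC URL can be assembled from parts; a minimal sketch using the placeholder hostname and principal from the examples above (not values from a real cluster):

```shell
#!/bin/sh
# Assemble a kerberized HiveServer2 JDBC URL from its parts.
# host/port/db/principal are placeholders; substitute your own.
host="hive-hostname"
port=10000
db="default"
principal="hive/_HOST@LIANG.COM"
url="jdbc:hive2://${host}:${port}/${db};principal=${principal}"
printf '%s\n' "$url"
# The URL would then be passed to beeline:  beeline -u "$url"
```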

In addition, does Hive always use mapreduce during execution?

Since Hive 0.10.0, for efficiency, a simple query (a bare select with no count, sum, or group by) does not go through map/reduce; it reads the hdfs files directly and applies the filter. The advantage is that no new MR task is launched, so execution is much faster; the downside is that the user experience is less friendly: with a large amount of data you may wait a long time with nothing returned.

It's easy to change this. There is a configuration parameter in hive-site.xml called

hive.fetch.task.conversion

Set this parameter to more and simple queries will not go through map/reduce; set it to minimal and any simple select will go through map/reduce.
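As a sketch, the parameter is set like any other hive-site.xml property; the fragment below is written to a stand-alone file purely for illustration of the shape:

```shell
#!/bin/sh
# Property fragment for hive-site.xml:
# "more" lets simple selects skip map/reduce; "minimal" restricts that.
cat > fetch-task.xml <<'EOF'
<property>
  <name>hive.fetch.task.conversion</name>
  <value>more</value>
</property>
EOF
```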

-Update 2018.2.11-

If you reinitialize hive's mysql database, you must first log in to mysql and drop the original library, otherwise you will encounter the following error:

# su - hive
[hive@aznbhivel01 ~]$ schematool -initSchema -dbType mysql

Metastore connection URL: jdbc:mysql://10.24.101.88:3306/hive_beta?useUnicode=true&characterEncoding=UTF-8&createDatabaseIfNotExist=true

Metastore Connection Driver: com.mysql.jdbc.Driver

Metastore connection User: envision

Starting metastore schema initialization to 1.2.0

Initialization script hive-schema-1.2.0.mysql.sql

Error: Specified key was too long; max key length is 3072 bytes (state=42000,code=1071)

Org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent!!

SchemaTool failed

After dropping the original hive library, initializing again succeeds directly:

[hive@aznbhivel01 ~]$ schematool -initSchema -dbType mysql

Metastore connection URL: jdbc:mysql://10.24.101.88:3306/hive_beta?useUnicode=true&characterEncoding=UTF-8&createDatabaseIfNotExist=true

Metastore Connection Driver: com.mysql.jdbc.Driver

Metastore connection User: envision

Starting metastore schema initialization to 1.2.0

Initialization script hive-schema-1.2.0.mysql.sql

Initialization script completed

SchemaTool completed

Startup and shutdown of Hive:

1. Start metastore

nohup /usr/local/hadoop/hive-release/bin/hive --service metastore --hiveconf hive.log4j.file=/usr/local/hadoop/hive-release/conf/meta-log4j.properties > /data1/hiveLogs-security/metastore.log 2>&1 &

2. Start hiveserver2

nohup /usr/local/hadoop/hive-release/bin/hive --service hiveserver2 > /data1/hiveLogs-security/hiveserver2.log 2>&1 &

3. Close HiveServer2

kill -9 `ps ax --cols 2000 | grep java | grep HiveServer2 | grep -v 'ps ax' | awk '{print $1;}'`

4. Close metastore

kill -9 `ps ax --cols 2000 | grep java | grep MetaStore | grep -v 'ps ax' | awk '{print $1;}'`
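A slightly gentler variant of these one-liners is to try SIGTERM before SIGKILL. The helper below is my own sketch, not part of the original procedure; the pid would come from the same ps pipelines as above:

```shell
#!/bin/sh
# stop_pid: send SIGTERM first, then SIGKILL if the process survives.
stop_pid() {
    pid="$1"
    kill "$pid" 2>/dev/null || return 1   # polite SIGTERM
    for i in 1 2 3; do
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null || true    # force if still alive
}

# Usage with the article's pipelines, e.g.:
# stop_pid "$(ps ax --cols 2000 | grep java | grep HiveServer2 | grep -v grep | awk '{print $1}')"
```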
