2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
In this article, the editor shares the basic execution statements in Hive. I hope you find it useful after reading; let's work through them together!
A brief overview of Hive
Hive is a data warehouse tool built on Hadoop. At present it supports only simple SQL-like query and modification operations, similar to a traditional relational database. It translates SQL directly into MapReduce programs, so developers do not have to learn to write MR jobs themselves, which improves development efficiency.
Example: in a Hive setup that uses MySQL as the metastore, the metadata (Hive's table definitions, column attributes, and so on) is stored in a MySQL database, while the table data itself is stored on HDFS, by default under /user/hive/warehouse/hive.db.
DDL statements
The examples below assume MySQL as the metastore database, which holds Hive's structural catalog.
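As a sketch of what that catalog contains, the following query lists each Hive table together with its columns. It runs inside MySQL, not Hive; the metastore database name `hive` is an assumption about this particular setup, while the table names follow the standard Hive metastore schema:

```sql
-- Run in MySQL against the Hive metastore (database name `hive` is assumed).
USE hive;

-- TBLS holds table definitions, SDS their storage descriptors,
-- and COLUMNS_V2 the column names and types.
SELECT t.TBL_NAME, c.COLUMN_NAME, c.TYPE_NAME
FROM TBLS t
JOIN SDS s        ON t.SD_ID = s.SD_ID
JOIN COLUMNS_V2 c ON s.CD_ID = c.CD_ID
ORDER BY t.TBL_NAME, c.INTEGER_IDX;
```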
Create a table
hive> create table test (id int, name string);
Hive introduces the concept of partitions because a plain select generally scans the entire table, which wastes a lot of time; partitioning lets a query read only the relevant slices.
hive> create table test2 (id int, name string) partitioned by (ds string);
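To illustrate how the ds partition is used in practice, here is a sketch (the file path is illustrative):

```sql
-- Load a file into one specific partition of test2.
LOAD DATA LOCAL INPATH '/home/hadoop/test.txt'
  INTO TABLE test2 PARTITION (ds = '2014-08-26');

-- The WHERE clause on the partition column prunes the scan
-- to a single partition directory instead of the whole table.
SELECT * FROM test2 WHERE ds = '2014-08-26';
```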
Browse the tables
hive> show tables;
show tables also accepts a regular expression, similar to like:
hive> show tables '.*t';
View a table's structure
hive> describe test;   -- or: desc test;
Modify or delete a table
hive> alter table test rename to test3;
hive> alter table test3 add columns (new_column type comment 'comments');
hive> drop table test;
DML statements
1. Load data
LOAD DATA LOCAL INPATH '/home/hadoop/test.txt' OVERWRITE INTO TABLE test;
LOCAL means the file is read from the local filesystem; without it, the path is taken from HDFS. OVERWRITE replaces the table's existing data; if it is omitted, the data is appended instead.
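Side by side, the two variants look like this (both paths are illustrative):

```sql
-- HDFS source, append to existing data (no LOCAL, no OVERWRITE):
LOAD DATA INPATH '/data/test.txt' INTO TABLE test;

-- Local filesystem source, replace the table's existing data:
LOAD DATA LOCAL INPATH '/home/hadoop/test.txt' OVERWRITE INTO TABLE test;
```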
2. Execute a query
select * from test2 where test2.ds = '2014-08-26';
3. It is worth noting that select count(*) from test differs from the usual count query against a relational database: Hive executes it as a MapReduce job.
hive> select count(*) from test2;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_1411720827309_0004, Tracking URL = http://master:8031/proxy/application_1411720827309_0004/
Kill Command = /usr/local/cloud/hadoop/bin/hadoop job -kill job_1411720827309_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
Stage-1 map = 0%, reduce = 0%
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
MapReduce Total cumulative CPU time: 2 seconds 300 msec
Ended Job = job_1411720827309_0004
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 2.3 sec HDFS Read: 245 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 300 msec
OK
3
Time taken: 27.508 seconds, Fetched: 1 row(s)
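The hints printed in the log above can be acted on before rerunning the query. For example (the values here are illustrative, not recommendations):

```sql
-- Force a fixed number of reducers for the next job. This is the old
-- parameter name, matching the Hadoop 1-era log output shown above.
set mapred.reduce.tasks=2;

select count(*) from test2;
```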
After reading this article, you should have a basic understanding of the basic execution statements in Hive. Thank you for reading!