
How to import and export Hive data


This article explains how to import and export Hive data. The methods described are simple, fast, and practical; follow along to learn how to import and export Hive data.

1. Import from the file system

Data source storage path: /root/data

hive> load data local inpath "/root/data" overwrite into table t1;
Loading data to table default.t1
Table default.t1 stats: [numFiles=1, numRows=0, totalSize=30, rawDataSize=0]
OK
Time taken: 1.712 seconds
hive> select * from t1;
OK
zhangsan 25
lisi 27
wangwu 24
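The DDL that created t1 is not shown above. A minimal sketch of a definition that would match this data, assuming two tab-delimited columns named name and age (the column names and the delimiter are assumptions), would be:

hive> create table t1 (name string, age int)
    > row format delimited fields terminated by '\t';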

2. Import from HDFS

HDFS data storage location:

[root@crxy177 ~]# hadoop dfs -ls /
-rw-r--r--   1 root supergroup         30 2015-05-18 10:39 /data

hive> load data inpath "/data" overwrite into table t1;
Loading data to table default.t1
Moved: 'hdfs://192.168.1.177:9000/user/hive/warehouse/t1/data' to trash at: hdfs://192.168.1.177:9000/user/root/.Trash/Current
Table default.t1 stats: [numFiles=1, numRows=0, totalSize=30, rawDataSize=0]
OK
Time taken: 1.551 seconds
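Note the difference from the local load in the previous section: without the LOCAL keyword, LOAD DATA moves the source file within HDFS into the table's warehouse directory (here hdfs://192.168.1.177:9000/user/hive/warehouse/t1), while LOAD DATA LOCAL copies the file from the local filesystem and leaves the original in place. You can check where a table's data lives with:

hive> describe formatted t1;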

3. Import through a query

Create a table

hive> create table t2 like t1;
OK
Time taken: 0.246 seconds

Import data

hive> insert overwrite table t2 select * form t1;
FAILED: NullPointerException null

The first attempt fails because "from" is misspelled as "form". Correct the typo and run again:

hive> insert overwrite table t2 select * from t1;

Query ID = root_20150518104747_7922f9d4-2e15-434a-8b9f-076393d73470
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1431916152610_0001, Tracking URL = http://crxy177:8088/proxy/application_1431916152610_0001/
Kill Command = /usr/local/hadoop-2.6.0/bin/hadoop job -kill job_1431916152610_0001
Interrupting... Be patient, this might take some time.
Press Ctrl+C again to kill JVM
Killing job with: job_1431916152610_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2015-05-18 10:47:40,679 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1431916152610_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
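The run above ends in failure only because the job was interrupted with Ctrl+C partway through; run to completion, the same statement loads t2 from t1. A related way to import through a query is CREATE TABLE AS SELECT, which creates and populates the table in one step (a sketch; the table name t5 is illustrative):

hive> create table t5 as select * from t1;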

4. Import multiple tables at the same time

Create tables t3 and t4

hive> create table t3 like t1;
OK
Time taken: 1.235 seconds
hive> create table t4 like t1;
OK
Time taken: 0.211 seconds

Multi-table data import

hive> FROM t1
    > INSERT OVERWRITE TABLE t2 SELECT * WHERE 1
    > INSERT OVERWRITE TABLE t3 SELECT * WHERE 1
    > INSERT OVERWRITE TABLE t4 SELECT * WHERE 1;

Query ID = root_20150518105252_9101659d-0990-4626-a4f7-8bad768af48b
Total jobs = 7
Launching Job 1 out of 7
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1431916152610_0002, Tracking URL = http://crxy177:8088/proxy/application_1431916152610_0002/
Kill Command = /usr/local/hadoop-2.6.0/bin/hadoop job -kill job_1431916152610_0002
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2015-05-18 10:... Stage-3 map = 0%, reduce = 0%
2015-05-18 10:52:53,227 Stage-3 map = 100%, reduce = 0%, Cumulative CPU 1.41 sec
MapReduce Total cumulative CPU time: 1 seconds 410 msec
Ended Job = job_1431916152610_0002
Stage-6 is selected by condition resolver.
Stage-5 is filtered out by condition resolver.
Stage-7 is filtered out by condition resolver.
Stage-12 is selected by condition resolver.
Stage-11 is filtered out by condition resolver.
Stage-13 is filtered out by condition resolver.
Stage-18 is selected by condition resolver.
Stage-17 is filtered out by condition resolver.
Stage-19 is filtered out by condition resolver.
Moving data to: hdfs://192.168.1.177:9000/tmp/hive/root/88e075ab-e7da-497d-a56b-74f652f3eae6/hive_2015-05-18_10-52-30_865_4936011539493382740-1/-ext-10000
Moving data to: hdfs://192.168.1.177:9000/tmp/hive/root/88e075ab-e7da-497d-a56b-74f652f3eae6/hive_2015-05-18_10-52-30_865_4936011539493382740-1/-ext-10002
Moving data to: hdfs://192.168.1.177:9000/tmp/hive/root/88e075ab-e7da-497d-a56b-74f652f3eae6/hive_2015-05-18_10-52-30_865_4936011539493382740-1/-ext-10004
Loading data to table default.t2
Loading data to table default.t3
Loading data to table default.t4
Table default.t2 stats: [numFiles=1, numRows=0, totalSize=30, rawDataSize=0]
Table default.t3 stats: [numFiles=1, numRows=0, totalSize=30, rawDataSize=0]
Table default.t4 stats: [numFiles=1, numRows=0, totalSize=30, rawDataSize=0]
MapReduce Jobs Launched:
Stage-Stage-3: Map: 1   Cumulative CPU: 1.41 sec   HDFS Read: 237 HDFS Write: 288 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 410 msec
OK
Time taken: 34.245 seconds
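The multi-table form reads t1 once and writes all three outputs in a single pass, which is why one statement launches several jobs and stages. Each INSERT clause may also carry its own filter; a sketch, using the age column assumed in the earlier DDL and illustrative predicates:

hive> FROM t1
    > INSERT OVERWRITE TABLE t2 SELECT * WHERE age < 25
    > INSERT OVERWRITE TABLE t3 SELECT * WHERE age >= 25;

Going the other direction, query results can be exported from Hive to a directory with INSERT OVERWRITE [LOCAL] DIRECTORY (the output path here is illustrative):

hive> insert overwrite local directory '/tmp/t1_export' select * from t1;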

At this point, you should have a deeper understanding of how to import and export Hive data. The best way to consolidate it is to try the steps in practice yourself.
