0025 - CentOS 6.5 installation of CDH 5.12.1 (2)

2025-04-06 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/03 Report --


5. Quick Component Service Verification

5.1 HDFS verification (mkdir + put + cat + get)

mkdir operation:

[root@ip-172-31-6-148 ~]# hadoop fs -mkdir -p /fayson/test
[root@ip-172-31-6-148 ~]# hadoop fs -ls /
Found 3 items
drwxr-xr-x   - root supergroup          0 2017-09-05 06:16 /fayson
drwxrwxrwt   - hdfs supergroup          0 2017-09-05 04:24 /tmp
drwxr-xr-x   - hdfs supergroup          0 2017-09-05 04:24 /user
[root@ip-172-31-6-148 ~]#

put operation:

[root@ip-172-31-6-148 ~]# vim a.txt
1,test
2,fayson
3,zhangsan
[root@ip-172-31-6-148 ~]# hadoop fs -put a.txt /fayson/test
[root@ip-172-31-6-148 ~]# hadoop fs -ls /fayson/test
Found 1 items
-rw-r--r--   3 root supergroup         27 2017-09-05 06:20 /fayson/test/a.txt
[root@ip-172-31-6-148 ~]#

cat operation:

[root@ip-172-31-6-148 ~]# hadoop fs -cat /fayson/test/a.txt
1,test
2,fayson
3,zhangsan
[root@ip-172-31-6-148 ~]#

get operation:

[root@ip-172-31-6-148 ~]# rm -rf a.txt
[root@ip-172-31-6-148 ~]# hadoop fs -get /fayson/test/a.txt
[root@ip-172-31-6-148 ~]# cat a.txt
1,test
2,fayson
3,zhangsan
[root@ip-172-31-6-148 ~]#
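As a quick local cross-check of the round trip above (this sketch is not part of the original walkthrough), the 27-byte size that `hadoop fs -ls` reports for a.txt is simply the three comma-delimited records, each terminated by a newline:

```python
# Reproduce locally the 27-byte size reported by `hadoop fs -ls`
# for /fayson/test/a.txt: three CSV records plus a newline each.
lines = ["1,test", "2,fayson", "3,zhangsan"]
content = "".join(line + "\n" for line in lines)
size = len(content.encode("utf-8"))
print(size)  # 27, matching the -ls output above
```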

5.2 Hive verification

Operate from the hive command line:

[root@ip-172-31-6-148 ~]# hive
...
hive> create external table test_table(
    > s1 string,
    > s2 string
    > ) row format delimited fields terminated by ','
    > stored as textfile location '/fayson/test';
OK
Time taken: 1.933 seconds
hive> select * from test_table;
OK
1       test
2       fayson
3       zhangsan
Time taken: 0.44 seconds, Fetched: 3 row(s)
hive> insert into test_table values("4", "lisi");
...
OK
Time taken: 18.815 seconds
hive> select * from test_table;
OK
4       lisi
1       test
2       fayson
3       zhangsan
Time taken: 0.079 seconds, Fetched: 4 row(s)
hive>

Hive MapReduce operation:

hive> select count(*) from test_table;
Query ID = root_20170905064545_100f033c-49b9-488b-9920-648a2e1c7285
...
OK
4
Time taken: 26.428 seconds, Fetched: 1 row(s)
hive>
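The two-column results above come from the `row format delimited fields terminated by ','` clause in the table definition: Hive splits each line of a.txt on the comma and maps the pieces to s1 and s2. A rough Python stand-in for that per-line split (hypothetical, for illustration only, not how Hive is implemented internally):

```python
# Sketch of how "fields terminated by ','" turns each line of a.txt
# into (s1, s2); column names come from the table definition above.
def parse_row(line):
    parts = line.split(",")
    # Hive fills missing trailing columns with NULL; a line with no
    # comma would therefore land entirely in s1, with s2 NULL.
    s1 = parts[0] if len(parts) > 0 else None
    s2 = parts[1] if len(parts) > 1 else None
    return (s1, s2)

rows = [parse_row(l) for l in ["1,test", "2,fayson", "3,zhangsan"]]
print(rows)  # [('1', 'test'), ('2', 'fayson'), ('3', 'zhangsan')]
```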

5.3 MapReduce verification

[root@ip-172-31-6-148 hadoop-mapreduce]# pwd
/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
[root@ip-172-31-6-148 hadoop-mapreduce]# hadoop jar hadoop-mapreduce-examples.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
17/09/05 06:48:53 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-6-148.fayson.com/172.31.6.148:8032
17/09/05 06:48:53 INFO input.FileInputFormat: Total input paths to process : 5
17/09/05 06:48:53 INFO mapreduce.JobSubmitter: number of splits:5
17/09/05 06:48:54 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1504585342848_0003
17/09/05 06:48:54 INFO impl.YarnClientImpl: Submitted application application_1504585342848_0003
17/09/05 06:48:54 INFO mapreduce.Job: The url to track the job: http://ip-172-31-6-148.fayson.com:8088/proxy/application_1504585342848_0003/

17/09/05 06:48:54 INFO mapreduce.Job: Running job: job_1504585342848_0003
17/09/05 06:49:01 INFO mapreduce.Job: Job job_1504585342848_0003 running in uber mode : false
17/09/05 06:49:01 INFO mapreduce.Job:  map 0% reduce 0%
17/09/05 06:49:07 INFO mapreduce.Job:  map 20% reduce 0%
17/09/05 06:49:08 INFO mapreduce.Job:  map 60% reduce 0%
17/09/05 06:49:09 INFO mapreduce.Job:  map 100% reduce 0%
17/09/05 06:49:15 INFO mapreduce.Job:  map 100% reduce 100%
17/09/05 06:49:16 INFO mapreduce.Job: Job job_1504585342848_0003 completed successfully
17/09/05 06:49:16 INFO mapreduce.Job: Counters: 49

        File System Counters
                FILE: Number of bytes read=64
                FILE: Number of bytes written=875624
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1400
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=23
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=5
                Launched reduce tasks=1
                Data-local map tasks=5
                Total time spent by all maps in occupied slots (ms)=27513
                Total time spent by all reduces in occupied slots (ms)=3803
                Total time spent by all map tasks (ms)=27513
                Total time spent by all reduce tasks (ms)=3803
                Total vcore-milliseconds taken by all map tasks=27513
                Total vcore-milliseconds taken by all reduce tasks=3803
                Total megabyte-milliseconds taken by all map tasks=28173312
                Total megabyte-milliseconds taken by all reduce tasks=3894272

        Map-Reduce Framework
                Map input records=5
                Map output records=10
                Map output bytes=90
                Map output materialized bytes=167
                Input split bytes=810
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=167
                Reduce input records=10
                Reduce output records=0
                Spilled Records=20
                Shuffled Maps =5
                Failed Shuffles=0
                Merged Map outputs=5
                GC time elapsed (ms)=273
                CPU time spent (ms)=4870
                Physical memory (bytes) snapshot=2424078336
                Virtual memory (bytes) snapshot=9435451392
                Total committed heap usage (bytes)=2822766592
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=590
        File Output Format Counters
                Bytes Written=97

Job Finished in 23.453 seconds

Estimated value of Pi is 3.68000000000000000000

[root@ip-172-31-6-148 hadoop-mapreduce]#
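The pi job's final estimate follows from simple Monte Carlo arithmetic: with 5 maps × 5 samples = 25 points, the estimate is 4 × (points inside the quarter circle) / (total points). The printed 3.68 implies 23 of the 25 points landed inside; that 23 is inferred from the output, not shown in the log (the sampling itself lives inside hadoop-mapreduce-examples.jar and is not reproduced here):

```python
# Arithmetic behind "Estimated value of Pi is 3.68...":
# estimate = 4 * inside / total over the quarter unit circle.
maps, samples_per_map = 5, 5
total = maps * samples_per_map   # 25 sample points
inside = 23                      # assumed, inferred from the printed estimate
estimate = 4.0 * inside / total
print(estimate)  # 3.68
```

With so few samples the estimate is crude; rerunning with e.g. `pi 10 100` would use 1000 points and land much closer to 3.14159.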

5.4 Spark verification

[root@ip-172-31-6-148 ~]# spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/
...
Spark context available as sc (master = yarn-client, app id = application_1504585342848_0004).
17/09/05 06:51:59 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.1.0-cdh5.12.1
17/09/05 06:51:59 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
SQL context available as sqlContext.

scala> val textFile = sc.textFile("hdfs://ip-172-31-6-148.fayson.com:8020/fayson/test/a.txt")
textFile: org.apache.spark.rdd.RDD[String] = hdfs://ip-172-31-6-148.fayson.com:8020/fayson/test/a.txt MapPartitionsRDD[1] at textFile at <console>:27

scala> textFile.count()
res0: Long = 3

scala>
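`textFile.count()` returns 3 because `sc.textFile` produces one record per line of a.txt. A local stand-in for that count (plain Python, no Spark assumed):

```python
# Local stand-in for the count() call above: Spark's textFile splits
# on line boundaries, so count() is simply the number of lines.
content = "1,test\n2,fayson\n3,zhangsan\n"
records = content.splitlines()
print(len(records))  # 3, matching res0: Long = 3
```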



Follow the Hadoop practice WeChat official account to get more hands-on Hadoop material first; if you like it, please follow and share.

Original article; reprints are welcome. Please credit the Hadoop practice WeChat official account.
