Shell operation of hdfs

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--


Usage: hadoop fs -<command> [options] <parameters>

-ls # displays directory information

--> hadoop fs -ls /    # view root directory information

-put # equivalent to -copyFromLocal; copies files from the local file system to an HDFS path

-get # equivalent to -copyToLocal; downloads files from HDFS to the local file system

-getmerge # merge and download multiple files

--> for example, the HDFS directory /aaa/ contains multiple files: log.1, log.2, log.3, ...

hadoop fs -getmerge /aaa/log.* ./log.sum
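
Locally, -getmerge behaves like concatenating the matched files in order into one output file. A minimal local analogue, with made-up sample file names (no HDFS cluster needed, only the local file system):

```shell
#!/bin/sh
# Local analogue of: hadoop fs -getmerge /aaa/log.* ./log.sum
# (sample files are hypothetical; -getmerge concatenates the matched
# HDFS files into a single local file)
mkdir -p /tmp/aaa
printf 'line from log.1\n' > /tmp/aaa/log.1
printf 'line from log.2\n' > /tmp/aaa/log.2
printf 'line from log.3\n' > /tmp/aaa/log.3

# Same effect as -getmerge, but on the local FS: shell glob order is
# lexical, matching the order -getmerge processes the source files.
cat /tmp/aaa/log.* > /tmp/log.sum
cat /tmp/log.sum
```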

-moveFromLocal # move from local to hdfs

-moveToLocal # move from hdfs to local

-cp # copies from one HDFS path to another HDFS path

--> hadoop fs -cp /aaa/jdk.tar.gz /bbb/jdk.tar.gz.2

-mv # move files in the hdfs directory

-mkdir # creates a directory on HDFS; with -p, missing parent directories are created as needed, while plain -mkdir can only create one level at a time

--> hadoop fs -mkdir -p /aaa/bbb/cc/dd
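
The -p semantics mirror the Linux mkdir -p. A quick local demonstration (paths are illustrative) of why plain mkdir fails on a nested path while -p succeeds:

```shell
#!/bin/sh
# Local analogue of: hadoop fs -mkdir -p /aaa/bbb/cc/dd
rm -rf /tmp/demo

# Plain mkdir fails because the parent /tmp/demo/aaa does not exist yet.
mkdir /tmp/demo/aaa/bbb 2>/dev/null && echo "plain mkdir succeeded" \
                                    || echo "plain mkdir failed"

# -p creates the whole chain of missing parents in one call.
mkdir -p /tmp/demo/aaa/bbb/cc/dd && echo "mkdir -p succeeded"
ls -d /tmp/demo/aaa/bbb/cc/dd
```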

-rm # deletes a file or folder; add -r to recursively delete a directory and its contents (e.g. the subdirectory bbb under /aaa); without -r, only a file can be deleted

--> hadoop fs -rm -r /aaa/bbb/

-rm -skipTrash # deletes permanently, bypassing the trash

[root@hdfs-master-84-20]# sudo -u hdfs hadoop fs -rm -skipTrash /data/vargoFile/

-rmdir # Delete empty directory

-cat # displays file contents

--> hadoop fs -cat /hello.txt

-tail # displays the tail of a file (for small files the output looks the same as cat)

[root@NewCDH-0--141 ~]# hadoop fs -tail /mjh/shiyanshuju/shiyanshuju.txt
001,zhangsan,5678,zhangsan@email.com,123456789,10000.00
003,wangwu,7890,wangwu@163.com,234567145,3456.00

[root@NewCDH-0--141 ~]# hadoop fs -cat /mjh/shiyanshuju/shiyanshuju.txt
001,zhangsan,5678,zhangsan@email.com,123456789,10000.00
003,wangwu,7890,wangwu@163.com,234567145,3456.00
[root@NewCDH-0--141 ~]#

-chgrp, -chmod, -chown

These three behave the same as in Linux.

--> hadoop fs -chmod <mode> /hello.txt    # e.g. an octal mode such as 644

-count # counts the number of directories, files, and bytes under a specified path

--> hadoop fs -count /aaa/

-createSnapshot, -deleteSnapshot, -renameSnapshot

These three manage snapshots of HDFS directories.

--> hadoop fs -createSnapshot /

-df # reports the free space of the file system

-du

--> hadoop fs -df -h /    # -h gives human-readable units, like the Linux command df -Th for viewing the free space of the whole system

--> hadoop fs -du -s -h /aaa/    # shows how much space the files under /aaa occupy (i.e. views the file size)

--> du -s -h mjh.data    # Linux equivalent: view the size of the mjh.data file

-du reports the size of each file under a directory; -du -s summarizes the total directory size (in bytes); -du -h displays the size of each file under the directory in human-readable units. -du -s -h /user/hive/warehouse/table_test summarizes the storage space occupied by the table in readable units.

[root@NewCDH-0--141 ~]# sudo -u hdfs hadoop fs -du /
46126        1610713413   /hbase
186          522          /mjh
173          519          /newdata
651          1953         /newdata.har
2119478      4314688      /tmp
589588200    1765140911   /user

[root@NewCDH-0--141]# sudo -u hdfs hadoop fs -du -s /
591754814    3380172006   /

[root@NewCDH-0--141]# sudo -u hdfs hadoop fs -du -s -h /
564.3 M      3.1 G        /

[root@NewCDH-0--141]# sudo -u hdfs hadoop fs -du -h /
45.0 K       1.5 G        /hbase
186          522          /mjh
173          519          /newdata
651          1.9 K        /newdata.har
2.0 M        4.1 M        /tmp
562.3 M      1.6 G        /user

The first column is the total logical size of the files under the directory.

The second column is the total physical storage those files occupy on the cluster, which depends on your replication factor. My replication factor is 3, so the second column is roughly three times the first (second column ≈ file size × number of replicas).

The third column is the directory you queried.
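
The column relationship can be checked with a little awk. The sample input below is synthetic (constructed so every path has replication factor exactly 3), since a real cluster may mix files with different replication factors:

```shell
#!/bin/sh
# Compute the physical/logical ratio per line of 'hadoop fs -du' style
# output: column 1 = logical size, column 2 = physical size, column 3 = path.
# Sample lines are synthetic for illustration.
printf '100 300 /a\n2119478 6358434 /tmp\n' |
awk '{ printf "%s ratio=%.0f\n", $3, $2 / $1 }'
```

With replication factor 3, every line reports ratio=3.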

[root@NewCDH-0--141]# sudo -u hdfs hadoop fs -count -q /
9223372036854775807 9223372036854772688 none inf 2192 927 591754814 /

The logical space of the root directory is 591754814B

1 G = 1024 MB = 1024×1024 KB = 1024×1024×1024 B, that is, 1 G = 1073741824 bytes.

591754814 B ≈ 0.55111 G
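
The conversion above can be reproduced with awk:

```shell
#!/bin/sh
# 591754814 bytes expressed in GB (1 G = 1024^3 B),
# matching the 0.55111 G figure above.
awk 'BEGIN { printf "%.5f\n", 591754814 / (1024 * 1024 * 1024) }'
```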

hadoop fs -count -q outputs eight columns, in order:

QUOTA (namespace quota: the limit on the number of files and directories), REMAINING_QUOTA, SPACE_QUOTA (the limit on physical space usage), REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE (logical size), PATHNAME.

Through hadoop fs -count -q you can see the detailed space and quota usage of a directory, including physical space, logical space, number of files, number of directories, and remaining quota. Reference: understanding the output of hadoop fsck, fs -dus, and -count -q: http://www.opstool.com/article/255
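
To make the eight columns easier to read, the sample root-directory output above can be labeled with awk (field names follow the column order listed above):

```shell
#!/bin/sh
# Label the eight fields of a 'hadoop fs -count -q' output line.
# The input line is the root-directory sample shown earlier.
echo '9223372036854775807 9223372036854772688 none inf 2192 927 591754814 /' |
awk '{ print "QUOTA=" $1;           print "REM_QUOTA=" $2
       print "SPACE_QUOTA=" $3;     print "REM_SPACE_QUOTA=" $4
       print "DIR_COUNT=" $5;       print "FILE_COUNT=" $6
       print "CONTENT_SIZE=" $7;    print "PATHNAME=" $8 }'
```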

-help # prints the usage manual for a command

-setrep # sets the replication factor of files in hdfs

--> hadoop fs -setrep 3 /aaa/jdk.tar.gz

-stat # displays the metadata of a file or folder; on TDH it shows the creation time

-tail # displays the last 1 KB of a file

-text # prints the contents of a file in character form
