
How to use the fs command in HDFS


This article introduces how to use the fs command in HDFS. It should be a useful reference for interested readers; I hope you learn a lot from it.

Version: Hadoop 2.7.4

-- View hadoop fs help information

[root@hadp-master sbin]# hadoop fs

Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] <path> ...]
[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]

Note:

The following commands are all run from the Linux command line.

[ ] indicates an optional parameter; < > indicates a required parameter.

Hadoop must be running before you use these commands.

(1) -appendToFile

Usage: hadoop fs -appendToFile <localsrc> ... <dst>

Purpose: append one or more local files to a file in HDFS.

Example:

hadoop fs -appendToFile localfile /user/hadoop/hadoopfile

hadoop fs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile

hadoop fs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile

hadoop fs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)

(2) -cat

Usage: hadoop fs -cat URI [URI ...]

Purpose: view the contents of a file (works for both local and HDFS paths).

Example:

hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2

hadoop fs -cat file:///file3 /user/hadoop/file4

(3) -checksum

Usage: hadoop fs -checksum URI

Function: view the checksum information of a file (the example output shows an MD5-based checksum).

Example:

hadoop fs -checksum hdfs://nn1.example.com/file1

hadoop fs -checksum file:///etc/hosts

(4) -chgrp

Usage: hadoop fs -chgrp [-R] GROUP URI [URI ...]

Purpose: change the group association of files.

With -R, the change is applied recursively through the directory structure.
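Example (a sketch; the group name hadoopgroup and the paths are illustrative):

hadoop fs -chgrp hadoopgroup /user/hadoop/file1

hadoop fs -chgrp -R hadoopgroup /user/hadoop/dir1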

(5) -chmod

Function: change file access permissions.

Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]

You can refer to the usage of chmod in a Linux file system; it is basically the same.
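Example (a sketch with illustrative paths; both octal and symbolic modes are accepted):

hadoop fs -chmod 644 /user/hadoop/file1

hadoop fs -chmod -R a+r /user/hadoop/dir1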

(6) -chown

Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]

Function: change the owner of files. With -R, the change is applied recursively through the directory structure. The caller must be a superuser.
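Example (a sketch; the user and group names are illustrative):

hadoop fs -chown hadoop:hadoopgroup /user/hadoop/file1

hadoop fs -chown -R hadoop /user/hadoop/dir1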

(7) -copyFromLocal

Usage: hadoop fs -copyFromLocal <localsrc> URI

Purpose: similar to the put command, except that the source must be a local file.

The -f option overwrites the destination if it already exists.

Example:

[root@two1 fanrui]# hadoop fs -copyFromLocal testFlatMap.txt /1.txt

copyFromLocal: `/1.txt': File exists

Adding the -f option makes the copy overwrite the existing file:

[root@two1 fanrui]# hadoop fs -copyFromLocal -f testFlatMap.txt /1.txt

(8) -copyToLocal

Usage: hadoop fs -copyToLocal [-ignoreCrc] [-crc] URI <localdst>

Function: similar to the get command, except that the destination must be a local file.
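Example (a sketch; the paths are illustrative):

hadoop fs -copyToLocal /user/hadoop/file1 localfile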

(9) -count

Function: count the number of directories, files, and bytes under the given paths.

Usage: hadoop fs -count [-q] [-h] [-v] <paths>

Example:

hadoop fs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2

hadoop fs -count -q hdfs://nn1.example.com/file1

hadoop fs -count -q -h hdfs://nn1.example.com/file1

hdfs dfs -count -q -h -v hdfs://nn1.example.com/file1

(10) -cp

Usage: hadoop fs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest>

Function: copy files within the HDFS file system.

-f: overwrite the destination if it already exists.

-p: preserve permissions, group, timestamps, ACLs, XAttrs, and so on. The official documentation describes it as follows:

The -p option will preserve file attributes [topx] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no arg, then preserves timestamps, ownership, permission. If -pa is specified, then preserves permission also because ACL is a super-set of permission. Determination of whether raw namespace extended attributes are preserved is independent of the -p flag.

Example:

[root@two1 fanrui]# hadoop fs -cp -p /tmp/fan /tmp/fan1

(11) -df

Usage: hadoop fs -df [-h] URI [URI ...]

Function: display the remaining free space.

Example:

[root@two1 fanrui]# hadoop fs -df /

Filesystem Size Used Available Use%

hdfs://localhost:9000 37626667008 311296 24792702976 0%

(12) -dus

Purpose: display a summary of file lengths. This command is deprecated; it is equivalent to -du -s.
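For example, the equivalent modern form (a sketch; the path is illustrative):

hadoop fs -du -s -h /user/hadoop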

(13) -expunge

Purpose: permanently delete files older than the retention threshold from the trash checkpoint directory, and create a new checkpoint.

Usage: hadoop fs -expunge
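Example (only meaningful when trash is enabled, i.e. fs.trash.interval is set to a positive value in core-site.xml):

hadoop fs -expunge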

(14) -find

Purpose: find files and directories that match the given expression. If no path is given, it defaults to the root directory /; if no expression is given, it defaults to -print.

Usage: hadoop fs -find <path> ... <expression> ...

-name pattern: match files whose name matches the pattern.

-iname pattern: like -name, but case-insensitive.

-print: print the matched path.

-print0: like -print, but appends an ASCII NUL character after each path.

Example:

hadoop fs -find / -name test -print

(15) -get

Purpose: copy files from HDFS to the local file system.

Usage: hadoop fs -get [-p] [-ignoreCrc] [-crc] <src> <localdst>

Example:

hadoop fs -get /user/hadoop/file localfile

hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile

(16) -getfacl

Purpose: display the ACLs (Access Control Lists) of files and directories. If a directory has a default ACL, that is displayed as well.

Usage: hadoop fs -getfacl [-R] <path>

Options:

-R: list the ACLs of all files and directories recursively.

path: the file or directory to list.

Example:

hadoop fs -getfacl /file

hadoop fs -getfacl -R /dir

Exit Code:

Returns 0 on success and non-zero on error.

(17) -getfattr

Purpose: display the extended attribute names and values (if any) of a file or directory.

Usage: hadoop fs -getfattr [-R] {-n name | -d} [-e en] <path>

Options:

-R: recursively list for all files and directories.

-n name: dump the named extended attribute value.

-d: dump all extended attribute values associated with the pathname.

-e en: encode retrieved values. Valid encodings are "text", "hex", and "base64". Values encoded as text are enclosed in double quotes ("), and values encoded as hex and base64 are prefixed with 0x and 0s, respectively.

path: the file or directory.

Example:

hadoop fs -getfattr -d /file

hadoop fs -getfattr -R -n user.myAttr /dir

(18) -getmerge

Function: merge all the files in an HDFS directory (or a list of source files) and output them to a single local file.

Usage: hadoop fs -getmerge [-nl] <src> <localdst>

-nl: add a newline character at the end of each merged file.

Example:

hadoop fs -getmerge -nl /src /opt/output.txt

hadoop fs -getmerge -nl /src/file1.txt /src/file2.txt /output.txt

(19) -help

Purpose: display the help documentation.

Usage: hadoop fs -help

(20) -ls

Function: list files, similar to the ls command under Linux.

Usage: hadoop fs -ls [-d] [-h] [-R] <args>

Options:

-d: list directories as plain entries (show the directory itself rather than its contents).

-h: format sizes in a human-readable way (rather than raw bytes).

-R: recursively list all subdirectories and files.

Example:

hadoop fs -ls -d /

hadoop fs -ls -h /

hadoop fs -ls -R /

-lsr

Function: deprecated; same effect as -ls -R.

(21) -mkdir

Purpose: create directories.

Usage: hadoop fs -mkdir [-p] <paths>

Options:

-p: create parent directories along the path, like the Unix mkdir -p command.

Example:

hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2

hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir

(22) -moveFromLocal

Usage: hadoop fs -moveFromLocal <localsrc> <dst>

What it does: similar to the put command, except that the operation is a move (meaning localsrc is deleted after the copy). localsrc must be a local file.
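Example (a sketch; the paths are illustrative, and localfile is removed once the copy succeeds):

hadoop fs -moveFromLocal localfile /user/hadoop/hadoopfile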

(23) -moveToLocal

Usage: hadoop fs -moveToLocal [-crc] <src> <localdst>

Purpose: this command has not been implemented yet; it prints "Not implemented yet".

(24) -mv

Usage: hadoop fs -mv URI [URI ...] <dest>

Function: move files.

Example:

hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2

hadoop fs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1

(25) -put

Usage: hadoop fs -put <localsrc> ... <dst>

Function: upload (copy) local files to the dst directory in HDFS.

Example:

hadoop fs -put localfile /user/hadoop/hadoopfile

hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir

hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile

hadoop fs -put - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)

(26) -rm

Usage: hadoop fs -rm [-f] [-r|-R] [-skipTrash] URI [URI ...]

Function: delete files.

Options:

The -f option suppresses the diagnostic message and the non-zero exit status when the file does not exist.

The -R option deletes the directory and any content under it recursively.

The -r option is equivalent to -R.

The -skipTrash option bypasses the trash, if enabled, and deletes the specified file(s) immediately. This can be useful when it is necessary to delete files from an over-quota directory.

Example:

hadoop fs -rm hdfs://nn.example.com/file /user/hadoop/emptydir

(27) -rmdir

Usage: hadoop fs -rmdir [--ignore-fail-on-non-empty] URI [URI ...]

Function: delete empty directories.

Options:

--ignore-fail-on-non-empty: do not report a failure when a directory cannot be deleted because it is not empty.
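Example (a sketch; the paths are illustrative):

hadoop fs -rmdir /user/hadoop/emptydir

hadoop fs -rmdir --ignore-fail-on-non-empty /user/hadoop/dir1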

(28) -rmr

Function: deprecated; same effect as -rm -r (recursive deletion).

(29) -setfacl

Usage: hadoop fs -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>] | [--set <acl_spec> <path>]

Purpose: set Access Control Lists (ACLs) on files and directories.

Options:

-b: remove all but the base ACL entries. The entries for user, group, and others are retained for compatibility with permission bits.

-k: remove the default ACL.

-R: apply the operation recursively to all files and directories.

-m: modify the ACL. New entries are added to the ACL and existing entries are retained.

-x: remove the specified ACL entries. Other ACL entries are retained.

--set: fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.

acl_spec: a comma-separated list of ACL entries.

path: the file or directory to modify.

Example:

hadoop fs -setfacl -m user:hadoop:rw- /file

hadoop fs -setfacl -x user:hadoop /file

hadoop fs -setfacl -b /file

hadoop fs -setfacl -k /dir

hadoop fs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file

hadoop fs -setfacl -R -m user:hadoop:r-x /dir

hadoop fs -setfacl -m default:user:hadoop:r-x /dir

(30) -setrep

Usage: hadoop fs -setrep [-R] [-w] <numReplicas> <path>

Function: change the target replication factor of a file. If path is a directory, the replication factor of all files under the directory tree is changed recursively. Reaching the target replication factor can take some time; the -w option makes the command wait until replication completes.

Example:

hadoop fs -setrep -w 3 /user/hadoop/dir1

(31) -stat

Usage: hadoop fs -stat [format] <path> ...

Function: print the statistics of a file or directory in the given format. Format accepts: file size in bytes (%b), type (%F), group of owner (%g), name (%n), block size (%o), replication (%r), user name of owner (%u), and modification date (%y as a UTC date, %Y as milliseconds since the epoch). The default is %y.

Example:

hadoop fs -stat "%F %u:%g %b %y %n" /file

(32) -tail

Usage: hadoop fs -tail [-f] URI

Function: output the last 1 KB of the file.

Options:

-f: similar to the tail -f command in Unix; as the file grows, the output is updated in real time.

Example: test it with a scenario. Suppose there is a file mpwtest1.txt in the / directory of HDFS.

Command: hadoop fs -tail -f /mpwtest1.txt

Open another terminal and enter the command: hadoop fs -appendToFile mpwtest2.txt /mpwtest1.txt

The new content then appears in the first window.

(33) -test

Function: test file information.

Usage: hadoop fs -test -[defsz] URI

Options:

-d: return 0 if the path is a directory.

-e: return 0 if the path exists.

-f: return 0 if the path is a file.

-s: return 0 if the path is not empty.

-z: return 0 if the file length is zero.

URI: a resource address; it can be a file or a directory.

Example:

hadoop fs -test -e filename
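Since -test reports its result only through the exit status, it is typically combined with a shell check (a sketch; the path is illustrative):

hadoop fs -test -e /user/hadoop/file1 && echo "exists" || echo "missing"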

(34) -text

Usage: hadoop fs -text <src>

Function: output a file in HDFS in text form (it can decode compressed or packaged formats such as zip archives; the example below reads a jar).

Example: hadoop fs -text /wc.jar

(35) -touchz

Usage: hadoop fs -touchz URI [URI ...]

Purpose: create an empty file.

Example: hadoop fs -touchz /hello.jar

(36) -truncate

Usage: hadoop fs -truncate [-w] <length> <paths>

Function: truncate all files that match the specified paths to the given length.

Options:

-w: wait for block recovery to complete. Without -w, the file may remain unclosed while recovery is in progress.

length: the truncation point; for example, 100 truncates the file at 100 bytes.

paths: the file address(es).

Example:

hadoop fs -truncate 55 /user/hadoop/file1 /user/hadoop/file2

hadoop fs -truncate -w 127 hdfs://nn1.example.com/user/hadoop/file1

(37) -usage

Usage: hadoop fs -usage <command>

Function: return the usage summary for an individual command.
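Example:

hadoop fs -usage rm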

Thank you for reading this article carefully. I hope "How to use the fs command in HDFS" has been helpful. Please continue to support us and follow the industry information channel; more related knowledge is waiting for you to learn!
