
What are the shell commands of HDFS


This article introduces the HDFS shell commands. It should serve as a useful reference, and I hope you gain a lot from reading it.

FS Shell

The file system (FS) shell is invoked as bin/hadoop fs <args>. All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority parts are optional; if unspecified, the default scheme from the configuration is used. An HDFS file or directory such as /parent/child can be written as hdfs://namenode:namenodeport/parent/child, or more simply as /parent/child (assuming your configuration defaults to namenode:namenodeport). Most FS shell commands behave like their Unix counterparts, except as noted in the descriptions below. Error messages go to stderr; all other output goes to stdout.
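
For example, the following two commands list the same HDFS directory, assuming the default file system in your configuration points at namenode:namenodeport (host, port, and path here are illustrative):

hadoop fs -ls hdfs://namenode:namenodeport/parent/child

hadoop fs -ls /parent/child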

cat

Usage: hadoop fs -cat URI [URI …]

Copies the contents of the specified files to stdout.

Examples:

hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2

hadoop fs -cat file:///file3 /user/hadoop/file4

Return value:

Returns 0 on success and -1 on failure.

chgrp

Usage: hadoop fs -chgrp [-R] GROUP URI [URI …]

Changes the group association of files. With -R, the change is made recursively through the directory structure. The user of the command must be the owner of the files or a superuser. For more information, see the HDFS Permissions User Guide.
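
A minimal example (group name and path are illustrative):

hadoop fs -chgrp -R hadoop /user/hadoop/dir1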

chmod

Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI …]

Changes the permissions of files. With -R, the change is made recursively through the directory structure. The user of the command must be the owner of the files or a superuser. For more information, see the HDFS Permissions User Guide.
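
For instance, to recursively set full permission for the owner and read/execute for everyone else (mode and path are illustrative):

hadoop fs -chmod -R 755 /user/hadoop/dir1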

chown

Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI …]

Changes the owner of files. With -R, the change is made recursively through the directory structure. The user of the command must be a superuser. For more information, see the HDFS Permissions User Guide.
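
A minimal example (owner, group, and path are illustrative):

hadoop fs -chown -R hadoop:hadoop /user/hadoop/dir1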

copyFromLocal

Usage: hadoop fs -copyFromLocal <localsrc> URI

Similar to put except that the source path is a local file.
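
A minimal example (paths are illustrative):

hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile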

copyToLocal

Usage: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>

Similar to get except that the destination path is a local file.
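
A minimal example (paths are illustrative):

hadoop fs -copyToLocal /user/hadoop/file localfile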

cp

Usage: hadoop fs -cp URI [URI …] <dest>

Copies files from the source path to the destination path. This command allows multiple source paths, in which case the destination path must be a directory.

Examples:

hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2

hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir

Return value:

Returns 0 on success and -1 on failure.

du

Usage: hadoop fs -du URI [URI …]

Displays the sizes of all files in a directory, or the size of a single file when the path is a file.

Examples:

hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1

Return value:

Returns 0 on success and -1 on failure.

dus

Usage: hadoop fs -dus <args>

Displays a summary of file sizes.
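
A minimal example (path is illustrative):

hadoop fs -dus /user/hadoop/dir1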

expunge

Usage: hadoop fs -expunge

Empties the trash. Refer to the HDFS design documentation for more information on the trash feature.

get

Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>

Copies files to the local file system. Files that fail the CRC check can be copied with the -ignorecrc option. Use the -crc option to copy files along with their CRC information.

Examples:

hadoop fs -get /user/hadoop/file localfile

hadoop fs -get hdfs://host:port/user/hadoop/file localfile

Return value:

Returns 0 on success and -1 on failure.

getmerge

Usage: hadoop fs -getmerge <src> <localdst> [addnl]

Takes a source directory and a destination file as input, and concatenates all files in the source directory into the destination file. addnl is optional and specifies that a newline be appended after each file.
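
For example, to concatenate every file under an HDFS directory into one local file, appending a newline after each (paths are illustrative):

hadoop fs -getmerge /user/hadoop/dir1 localmerged.txt addnl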

ls

Usage: hadoop fs -ls <args>

If the path is a file, file information is returned in the following format:

file name <number of replicas> file size modification date modification time permissions user ID group ID

If the path is a directory, a list of its immediate children is returned, as in Unix. Each directory is listed as:

directory name <dir> modification date modification time permissions user ID group ID

Examples:

hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile

Return value:

Returns 0 on success and -1 on failure.

lsr

Usage: hadoop fs -lsr <args>

Recursive version of the ls command. Similar to ls -R in Unix.
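
A minimal example (path is illustrative):

hadoop fs -lsr /user/hadoop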

mkdir

Usage: hadoop fs -mkdir <paths>

Takes the path URIs as arguments and creates the directories. The behavior is similar to Unix mkdir -p: parent directories along the path are created as needed.

Examples:

hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2

hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir

Return value:

Returns 0 on success and -1 on failure.

moveFromLocal

Usage: dfs -moveFromLocal <src> <dst>

Outputs a "not implemented" message.

mv

Usage: hadoop fs -mv URI [URI …] <dest>

Moves files from the source path to the destination path. Multiple source paths are allowed, in which case the destination must be a directory. Moving files across file systems is not permitted.

Examples:

hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2

hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1

Return value:

Returns 0 on success and -1 on failure.

put

Usage: hadoop fs -put <localsrc> ... <dst>

Copies one or more source paths from the local file system to the destination file system. Also supports reading input from stdin and writing it to the destination file system.

hadoop fs -put localfile /user/hadoop/hadoopfile

hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir

hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile

hadoop fs -put - hdfs://host:port/hadoop/hadoopfile

Reads input from standard input.

Return value:

Returns 0 on success and -1 on failure.

rm

Usage: hadoop fs -rm URI [URI …]

Deletes the specified files. Only deletes non-empty directories and files. Refer to the rmr command for recursive deletion.

Examples:

hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir

Return value:

Returns 0 on success and -1 on failure.

rmr

Usage: hadoop fs -rmr URI [URI …]

Recursive version of delete.

Examples:

hadoop fs -rmr /user/hadoop/dir

hadoop fs -rmr hdfs://host:port/user/hadoop/dir

Return value:

Returns 0 on success and -1 on failure.

setrep

Usage: hadoop fs -setrep [-R] <path>

Changes the replication factor of a file. The -R option recursively changes the replication factor of all files under a directory.

Examples:

hadoop fs -setrep -w 3 -R /user/hadoop/dir1

Return value:

Returns 0 on success and -1 on failure.

stat

Usage: hadoop fs -stat URI [URI …]

Returns statistics for the specified path.

Examples:

hadoop fs -stat path

Return value:

Returns 0 on success and -1 on failure.

tail

Usage: hadoop fs -tail [-f] URI

Outputs the last kilobyte of the file to stdout. The -f option is supported and behaves as in Unix.

Examples:

hadoop fs -tail pathname

Return value:

Returns 0 on success and -1 on failure.

test

Usage: hadoop fs -test -[ezd] URI

Options:

-e Checks whether the file exists. Returns 0 if it does.

-z Checks whether the file is zero bytes long. Returns 0 if it is.

-d Returns 1 if the path is a directory, otherwise returns 0.

Examples:

hadoop fs -test -e filename
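
The other options are used the same way; for example (file and directory names are illustrative):

hadoop fs -test -z filename

hadoop fs -test -d dirname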

text

Usage: hadoop fs -text <src>

Outputs the source file in text format. The allowed formats are zip and TextRecordInputStream.
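
A minimal example, assuming the source file is in one of the allowed formats (path is illustrative):

hadoop fs -text /user/hadoop/file1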

touchz

Usage: hadoop fs -touchz URI [URI …]

Create an empty file of 0 bytes.

Examples:

hadoop fs -touchz pathname

Return value:

Returns 0 on success and -1 on failure.

Thank you for reading this article carefully. I hope "What are the shell commands of HDFS" has been helpful. More related knowledge awaits you on our industry information channel!
