Hadoop Shell Commands (version 2.7.2)
Official documentation: 2.7.2
Official Chinese documentation: 1.0.4
Overview: appendToFile, cat, checksum, chgrp, chmod, chown, copyFromLocal, copyToLocal, count, cp, createSnapshot, deleteSnapshot, df, du, dus, expunge, find, get, getfacl, getfattr, getmerge, help, ls, lsr, mkdir, moveFromLocal, moveToLocal, mv, put, renameSnapshot, rm, rmdir, rmr, setfacl, setfattr, setrep, stat, tail, test, text, touchz, truncate, usage
The File System (FS) shell includes various shell-like commands that directly interact with the Hadoop Distributed File System (HDFS) as well as other file systems that Hadoop supports, such as the local FS, HFTP FS, S3 FS, and others. The FS shell is invoked by:
bin/hadoop fs <args>
All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For the HDFS file system the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme given in the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenodehost/parent/child or simply as /parent/child (given that your configuration is set to point to hdfs://namenodehost).
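For instance, assuming the configuration points the default file system at hdfs://namenodehost as in the paragraph above, the following two listings of the illustrative path /parent/child are equivalent:
hadoop fs -ls hdfs://namenodehost/parent/child
hadoop fs -ls /parent/child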
Most FS shell commands behave similarly to their corresponding Unix shell commands; the differences are noted in the descriptions of the individual commands below. Error information is sent to stderr, and other output is sent to stdout.
If you are using HDFS, hdfs dfs is a synonym.
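As a small illustration (the path /user/hadoop is hypothetical), these two invocations behave identically when the default file system is HDFS:
hadoop fs -ls /user/hadoop
hdfs dfs -ls /user/hadoop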
Refer to the command manual for general shell options.
AppendToFile
Usage:
hadoop fs -appendToFile <localsrc> ... <dst>
Append a single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and appends it to the destination file system.
hadoop fs -appendToFile localfile /user/hadoop/hadoopfile
hadoop fs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
hadoop fs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
hadoop fs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
Exit code:
Returns 0 on success and 1 on error.
Cat
Usage:
hadoop fs -cat URI [URI ...]
Outputs the contents of the specified files to stdout.
Example:
hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
hadoop fs -cat file:///file3 /user/hadoop/file4
Exit code:
Returns 0 on success and -1 on error.
Checksum
Usage:
hadoop fs -checksum URI
Returns the checksum information of the file.
Example:
hadoop fs -checksum hdfs://nn1.example.com/file1
hadoop fs -checksum file:///etc/hosts

Chgrp
Usage:
hadoop fs -chgrp [-R] GROUP URI [URI ...]
Change the group association of files. The user must be the owner of the file, or otherwise a superuser. Additional information is in the Permissions Guide.
Options:
The -R option will make the change recursively through the directory structure.

Chmod
Usage:
hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Change the permissions of files. With -R, make the change recursively through the directory structure. The user must be the owner of the file, or otherwise a superuser. Additional information is in the Permissions Guide.
Options:
The -R option will make the change recursively through the directory structure.

Chown
Usage:
hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Change the owner of files. The user must be a superuser. Additional information is in the Permissions Guide.
Options:
The -R option will make the change recursively through the directory structure.

CopyFromLocal
Usage:
hadoop fs -copyFromLocal <localsrc> URI
Similar to the put command, but the source is limited to local file references.
Options:
-f: overwrite the destination if it already exists.

CopyToLocal
Usage:
hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
Similar to the get command, but the destination is limited to local file references.
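An illustrative example with hypothetical paths, mirroring the get examples:
hadoop fs -copyToLocal /user/hadoop/file localfile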
Count
Usage:
hadoop fs -count [-q] [-h] [-v] <paths>
Counts the number of directories, files, and bytes under the paths that match the specified file pattern. The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME
The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME
The-h option displays the size in a human-readable format.
The -v option displays a header line.
Example:
hadoop fs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
hadoop fs -count -q hdfs://nn1.example.com/file1
hadoop fs -count -q -h hdfs://nn1.example.com/file1
hdfs dfs -count -q -h -v hdfs://nn1.example.com/file1

Exit code:
Returns 0 on success and -1 on error.

Cp
Usage:
hadoop fs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest>
Copy the file from the source to the destination. This command also allows multiple sources, in which case the destination must be a directory.
'raw.*' namespace extended attributes are preserved if (1) the source and destination file systems support them (HDFS only), and (2) all source and destination pathnames are in the /.reserved/raw hierarchy. Determination of whether raw.* namespace extended attributes are preserved is independent of the -p (preserve) flag.
Options:
If the target already exists, the -f option overwrites the target.
The -p option preserves file attributes [topx] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no arg, then timestamps, ownership, and permission are preserved. If -pa is specified, then permission is also preserved because ACL is a super-set of permission. Determination of whether raw namespace extended attributes are preserved is independent of the -p flag.
Example:
hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Exit code:
Returns 0 on success and -1 on error.
CreateSnapshot
See the HDFS Snapshot Guide.
DeleteSnapshot
See the HDFS Snapshot Guide.
Df
Usage:
hadoop fs -df [-h] URI [URI ...]
Displays the available space.
Options: the -h option will format file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).
Example:
hadoop dfs -df /user/hadoop/dir1

Du
Usage:
hadoop fs -du [-s] [-h] URI [URI ...]
Displays the sizes of files and directories contained in the given directory, or the length of a file in case it is just a file.
Options: the -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files. The -h option will format file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit code: returns 0 on success and -1 on error.
Dus
Usage:
hadoop fs -dus <args>
Displays a summary of file lengths.
Note: this command is deprecated. Instead, use hadoop fs -du -s.
Expunge
Usage:
hadoop fs -expunge
Empty the trash. Refer to the HDFS Architecture Guide for more information on the trash feature.
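A minimal invocation (the command takes no arguments):
hadoop fs -expunge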
Find
Usage:
hadoop fs -find <path> ... <expression> ...
Finds all files that match the specified expression and applies the selected action to them. If no path is specified, the default is the current working directory. If no expression is specified, the default is -print.
The following primary expressions are recognised:
-name pattern
-iname pattern
Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used, the match is case insensitive.
-print
-print0
Always evaluates to true. Causes the current pathname to be written to standard output. If the -print0 expression is used, an ASCII NULL character is appended.
The following operators are recognized:
expression -a expression
expression -and expression
expression expression
Logical AND operator used to concatenate two expressions. If both subexpressions return true, true is returned. Implied by the juxtaposition of two expressions, so it does not need to be explicitly specified. If the first expression fails, the second expression is not applied.
Example:
hadoop fs -find / -name test -print
Exit code:
Returns 0 on success and -1 on error.
Get
Usage:
hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Example:
hadoop fs -get /user/hadoop/file localfile
hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile
Exit code:
Returns 0 on success and -1 on error.
Getfacl
Usage:
hadoop fs -getfacl [-R] <path>
Displays access control lists (ACL) for files and directories. If the directory has a default ACL, getfacl also displays the default ACL.
Options:
-R: list the ACLs of all files and directories recursively.
path: the file or directory to list.
Example:
hadoop fs -getfacl /file
hadoop fs -getfacl -R /dir
Exit code:
Returns 0 on success and non-zero on error.
Getfattr
Usage:
hadoop fs -getfattr [-R] -n name | -d [-e en] <path>
Displays the extended attribute name and value of the file or directory, if any.
Options:
-R: recursively list the attributes for all files and directories.
-n name: dump the named extended attribute value.
-d: dump all extended attribute values associated with pathname.
-e encoding: encode values after retrieving them. Valid encodings are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), and values encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
path: the file or directory.
Example:
hadoop fs -getfattr -d /file
hadoop fs -getfattr -R -n user.myAttr /dir
Exit code:
Returns 0 on success and non-zero on error.
Getmerge
Usage:
hadoop fs -getmerge [-nl] <src> <localdst>
Takes a source directory and a destination file as input and concatenates the files in src into the destination local file. Optionally, -nl can be set to enable adding a newline character (LF) at the end of each file.
Example:
hadoop fs -getmerge -nl /src /opt/output.txt
hadoop fs -getmerge -nl /src/file1.txt /src/file2.txt /output.txt
Exit code:
Returns 0 on success and non-zero on error.
Help
Usage:
hadoop fs -help
Returns the usage output.
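For instance, usage help for an individual command can be requested by naming it; ls is chosen here purely as an illustration:
hadoop fs -help ls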
Ls
Usage:
hadoop fs -ls [-d] [-h] [-R] <args>
Options:
-d: directories are listed as plain files.
-h: format file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).
-R: recursively list subdirectories encountered.
For files, ls returns the stat of the file using the following format:
Permissions number_of_replicas userid groupid filesize modification_date modification_time filename
For a directory, it returns a list of its direct children, as in Unix. A directory is listed as:
Permissions userid groupid modification_date modification_time dirname
By default, the files in the directory are sorted by file name.
Example:
hadoop fs -ls /user/hadoop/file1
Exit code:
Returns 0 on success and -1 on error.
Lsr
Usage:
hadoop fs -lsr <args>
The recursive version of ls.
Note: this command is deprecated. Instead, use hadoop fs -ls -R.
Mkdir
Usage:
hadoop fs -mkdir [-p] <paths>
Takes path URIs as arguments and creates directories.
Options:
The -p option behaves much like Unix mkdir -p, creating parent directories along the path.
Example:
hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
Exit code:
Returns 0 on success and -1 on error.
MoveFromLocal
Usage:
hadoop fs -moveFromLocal <localsrc> <dst>
Similar to the put command, except that the source localsrc is deleted after replication.
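An illustrative example with hypothetical paths, following the pattern of the put examples:
hadoop fs -moveFromLocal localfile /user/hadoop/hadoopfile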
MoveToLocal
Usage:
hadoop fs -moveToLocal [-crc] <src> <dst>
Displays a "Not implemented yet" message.
Mv
Usage:
hadoop fs -mv URI [URI ...] <dest>
Move the file from the source to the destination. This command allows multiple sources, in which case the destination needs to be a directory. Moving files across file systems is not allowed.
Example:
hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
hadoop fs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
Exit code:
Returns 0 on success and -1 on error.
Put
Usage:
hadoop fs -put <localsrc> ... <dst>
Copy a single src or multiple srcs from the local file system to the destination file system. Input is also read from stdin and written to the destination file system.
hadoop fs -put localfile /user/hadoop/hadoopfile
hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
hadoop fs -put - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
Exit code:
Returns 0 on success and -1 on error.
RenameSnapshot
See the HDFS Snapshot Guide.
Rm
Usage:
hadoop fs -rm [-f] [-r |-R] [-skipTrash] URI [URI ...]
Deletes the file specified as args.
Options:
The -f option will not display a diagnostic message or modify the exit status to reflect an error if the file does not exist.
The -R option deletes the directory and any content under it recursively.
The -r option is equivalent to -R.
The -skipTrash option will bypass trash, if enabled, and delete the specified file(s) immediately. This can be useful when it is necessary to delete files from an over-quota directory.
Example:
hadoop fs -rm hdfs://nn.example.com/file /user/hadoop/emptydir
Exit code:
Returns 0 on success and -1 on error.
Rmdir
Usage:
hadoop fs -rmdir [--ignore-fail-on-non-empty] URI [URI ...]
Delete the directory.
Options:
--ignore-fail-on-non-empty: when using wildcards, do not fail if a directory still contains files.
Example:
hadoop fs -rmdir /user/hadoop/emptydir

Rmr
Usage:
hadoop fs -rmr [-skipTrash] URI [URI ...]
The recursive version of the deletion.
Note: this command is deprecated. Instead, use hadoop fs -rm -r.
Setfacl
Usage:
hadoop fs -setfacl [-R] [-b |-k -m |-x <acl_spec> <path>] | [--set <acl_spec> <path>]
Sets the access control list (ACL) for files and directories.
Options:
-b: remove all but the base ACL entries. The entries for user, group, and others are retained for compatibility with permission bits.
-k: remove the default ACL.
-R: apply operations to all files and directories recursively.
-m: modify the ACL. New entries are added to the ACL, and existing entries are retained.
-x: remove the specified ACL entries. Other ACL entries are retained.
--set: fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.
acl_spec: a comma-separated list of ACL entries.
path: the file or directory to modify.
Example:
hadoop fs -setfacl -m user:hadoop:rw- /file
hadoop fs -setfacl -x user:hadoop /file
hadoop fs -setfacl -b /file
hadoop fs -setfacl -k /dir
hadoop fs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
hadoop fs -setfacl -R -m user:hadoop:r-x /dir
hadoop fs -setfacl -m default:user:hadoop:r-x /dir
Exit code:
Returns 0 on success and non-zero on error.
Setfattr
Usage:
hadoop fs -setfattr -n name [-v value] | -x name <path>
Sets the extended property name and value of the file or directory.
Options:
-n name: the extended attribute name.
-v value: the extended attribute value. There are three different encoding methods for the value. If the argument is enclosed in double quotes, the value is the string inside the quotes. If the argument is prefixed with 0x or 0X, it is taken as a hexadecimal number. If the argument begins with 0s or 0S, it is taken as base64 encoding.
-x name: remove the extended attribute.
path: the file or directory.
Example:
hadoop fs -setfattr -n user.myAttr -v myValue /file
hadoop fs -setfattr -n user.noValue /file
hadoop fs -setfattr -x user.myAttr /file
Exit code:
Returns 0 on success and non-zero on error.
Setrep
Usage:
hadoop fs -setrep [-R] [-w] <numReplicas> <path>
Change the replication factor of the file. If path is a directory, the command recursively changes the replication factor of all files under the directory tree rooted in path.
Options:
The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time. The -R flag is accepted for backward compatibility; it has no effect.
Example:
hadoop fs -setrep -w 3 /user/hadoop/dir1
Exit code:
Returns 0 on success and -1 on error.
Stat
Usage:
hadoop fs -stat [format] <path> ...
Prints statistics about the file/directory at the given path in the specified format. The format accepts filesize in blocks (%b), type (%F), group name of owner (%g), name (%n), block size (%o), replication (%r), user name of owner (%u), and modification date (%y, %Y). %y shows the UTC date as "yyyy-MM-dd HH:mm:ss" and %Y shows milliseconds since January 1, 1970 UTC. If the format is not specified, %y is used by default.
Example:
hadoop fs -stat "%F %u:%g %b %y %n" /file
Exit code: returns 0 on success and -1 on error.
Tail
Usage:
hadoop fs -tail [-f] URI
Displays the last kilobyte of the file to stdout.
Options:
The -f option outputs appended data as the file grows, as in Unix.
Example:
hadoop fs -tail pathname
Exit code: returns 0 on success and -1 on error.
Test
Usage:
hadoop fs -test -[defsz] URI
Options:
-d: return 0 if the path is a directory.
-e: return 0 if the path exists.
-f: return 0 if the path is a file.
-s: return 0 if the path is not empty.
-z: return 0 if the file is zero length.
Example:
hadoop fs -test -e filename

Text
Usage:
hadoop fs -text <src>
Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
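An illustrative example; the path is hypothetical and assumes the source is stored in one of the allowed formats (for instance a SequenceFile, which is rendered via TextRecordInputStream):
hadoop fs -text /user/hadoop/data.seq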
Touchz
Usage:
hadoop fs -touchz URI [URI ...]
Create a zero-length file.
Example:
hadoop fs -touchz pathname
Exit code: returns 0 on success and -1 on error.
Truncate
Usage:
hadoop fs -truncate [-w] <length> <paths>
Truncates all files that match the specified file pattern to the specified length.
Options:
The -w flag requests that the command wait for block recovery to complete, if necessary. Without the -w flag, the file may remain unclosed for some time while the recovery is in progress. During this time, the file cannot be reopened for append.
Example:
hadoop fs -truncate 55 /user/hadoop/file1 /user/hadoop/file2
hadoop fs -truncate -w 127 hdfs://nn1.example.com/user/hadoop/file1

Usage
Usage:
hadoop fs -usage command
Returns help for a single command.
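For example, to print the usage synopsis for a single command (ls is chosen here purely as an illustration):
hadoop fs -usage ls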