What are the hdfs commands?

2025-01-29 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report --

This article explains the common hdfs commands in detail. The editor finds them quite practical and shares them here for reference; I hope you gain something from reading this article.

Common hdfs commands:

Part 1: hdfs file system commands

Category 1: file path creation, deletion, modification and lookup:

hdfs dfs -mkdir dir Create the directory dir

hdfs dfs -rmr dir Delete the directory dir recursively (deprecated; use hdfs dfs -rm -r)

hdfs dfs -ls dir List the files and directories under dir

hdfs dfs -lsr dir Recursively list directory contents (deprecated; use hdfs dfs -ls -R)

hdfs dfs -stat path Print information about the given path
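A minimal sketch combining the commands above: create a per-day directory (adding -p so missing parents are created too) and list it to confirm. The /logs layout and date are hypothetical; the sketch assumes the `hdfs` client is on the PATH.

```shell
#!/bin/sh
# Sketch: create a (hypothetical) per-day directory under /logs,
# then list it to confirm it exists.
make_daily_dir() {
  day="$1"                       # e.g. 2024-06-01
  dir="/logs/$day"
  hdfs dfs -mkdir -p "$dir"      # -p also creates missing parent dirs
  hdfs dfs -ls "$dir"            # confirm the directory is there
  echo "$dir"
}
```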

Category 2: space usage commands:

hdfs dfs -du -h dir Display file sizes under dir in human-readable form

hdfs dfs -dus uri Display the aggregate size of the target (deprecated; use hdfs dfs -du -s)

hdfs dfs -du path/file Display the size of the file file
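The -du output is line-oriented and easy to post-process. A small sketch that finds the largest entry under a directory, assuming a "size path" column layout (the exact columns vary between Hadoop versions, so the awk field may need adjusting):

```shell
#!/bin/sh
# Sketch: report the largest entry under a directory by parsing
# `hdfs dfs -du` output, assumed here to be "size path" per line.
largest_entry() {
  hdfs dfs -du "$1" | sort -n | tail -1 | awk '{print $NF}'
}
```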

Category 3: permission management:

hdfs dfs -chgrp group path Change the group a file belongs to

hdfs dfs -chgrp -R group /dir Recursively change the group of the directory dir

hdfs dfs -chmod [-R] mode path Change the permissions of a file

hdfs dfs -chown owner[:group] /dir Change the owner of a file

hdfs dfs -chown -R owner[:group] /dir Recursively change the owner of the directory dir
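A common combination of the permission commands: hand a directory over to a service account and restrict group access. The user name, group name, and path below are all hypothetical:

```shell
#!/bin/sh
# Sketch: give a (hypothetical) etl user ownership of a directory
# and restrict it to owner + group access.
lock_down() {
  dir="$1"
  hdfs dfs -chown -R etl:analytics "$dir"   # owner:group, recursively
  hdfs dfs -chmod -R 750 "$dir"             # rwx owner, r-x group, no other
}
```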

Category 4: file operations (upload, download, copy):

hdfs dfs -touchz a.txt Create an empty (zero-length) file a.txt

hdfs dfs -rm file Delete the file file

hdfs dfs -put file dir Upload the local file file into the directory dir

hdfs dfs -put filea dir/fileb Upload filea into dir under the name fileb

hdfs dfs -get file dir Download file into the local directory dir

hdfs dfs -getmerge hdfs://Master:9000/data/SogouResult.txt CombinedResult Merge multiple files in hdfs into one file; the merged file is written to the local file system

hdfs dfs -cat file View the contents of file

hdfs dfs -text /dir/a.txt If the file is plain text this is equivalent to -cat; if it is compressed, it is decompressed first and then displayed

hdfs dfs -tail /dir/a.txt View the last 1 KB of the file a.txt under the directory dir

hdfs dfs -copyFromLocal localsrc path Copy a file from the local file system

hdfs dfs -copyToLocal /hdfs/a.txt /local/a.txt Copy a file from hdfs to the local file system

hdfs dfs -copyFromLocal /dir/source /dir/target Copy a file from the source path to the target path

hdfs dfs -mv /path/a.txt /path/b.txt Move (rename) a.txt to b.txt; this can be used to restore files from the trash
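The trash-restore trick mentioned above can be wrapped in a small helper. The sketch assumes the typical trash layout /user/&lt;user&gt;/.Trash/Current/&lt;original path&gt;; the exact layout depends on the trash configuration, so treat the path construction as an assumption:

```shell
#!/bin/sh
# Sketch: restore a deleted file from the trash back to its original
# absolute path, assuming the usual .Trash/Current layout.
restore_from_trash() {
  path="$1"   # absolute hdfs path, e.g. /data/a.txt
  hdfs dfs -mv "/user/$USER/.Trash/Current$path" "$path"
}
```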

Category 5: test commands:

hdfs dfs -test -e /dir/a.txt Test whether the file exists; returns 0 if it does, non-zero otherwise

hdfs dfs -test -d /dir Test whether /dir is a directory; returns 0 if it is, non-zero otherwise

hdfs dfs -test -z /dir/a.txt Test whether the file is zero length; returns 0 if it is, non-zero otherwise
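Because -test reports its result through the exit code, it slots directly into shell conditionals. A sketch that uploads a local file only if the destination does not already exist (paths are hypothetical):

```shell
#!/bin/sh
# Sketch: upload only when the destination is absent, using the
# exit code of `hdfs dfs -test -e` to decide.
upload_if_absent() {
  src="$1"; dest="$2"
  if hdfs dfs -test -e "$dest"; then
    echo "skip: $dest already exists"
  else
    hdfs dfs -put "$src" "$dest"
    echo "uploaded: $dest"
  fi
}
```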

Category 6: system administration:

hdfs dfs -expunge Empty the trash

hdfs dfsadmin -safemode enter Enter safe mode

hdfs dfsadmin -safemode leave Leave safe mode

hdfs dfsadmin -refreshNodes Re-read the allowed/excluded datanode files; adding a datanode to the exclude file and running this command decommissions it

hdfs dfsadmin -finalizeUpgrade Finalize an upgrade

hdfs dfsadmin -upgradeProgress status View the status of an upgrade

hdfs version Show the hdfs version

hdfs daemonlog -getlevel <host:port> <classname> Print the log level of the daemon running at <host:port>

hdfs daemonlog -setlevel <host:port> <classname> <level> Set the log level of the daemon running at <host:port>

hdfs dfs -setrep [-R] [-w] <rep> <path> Set the replication factor of a file
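Safe mode is often entered briefly around maintenance work. A small sketch that runs an arbitrary command with the namenode in safe mode and leaves safe mode afterwards (the maintenance command itself is up to the caller):

```shell
#!/bin/sh
# Sketch: enter safe mode, run the given maintenance command,
# then leave safe mode again.
with_safemode() {
  hdfs dfsadmin -safemode enter
  "$@"                          # the maintenance command to run
  hdfs dfsadmin -safemode leave
}
```

Usage: `with_safemode some_backup_script.sh`. Note the sketch does not trap errors; in production you would want to leave safe mode even if the inner command fails.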

Part 2: operations and maintenance commands

start-dfs.sh Start the namenode and datanodes; bring up the file system

stop-dfs.sh Shut down the file system

start-yarn.sh Start the resourcemanager and nodemanagers

stop-yarn.sh Stop the resourcemanager and nodemanagers

start-all.sh Start hdfs and yarn

stop-all.sh Stop hdfs and yarn

hadoop-daemon.sh start datanode Start a datanode on its own

start-balancer.sh -t 10% Start the balancer; avoid running it on the namenode host

hdfs namenode -format Format the file system

hdfs namenode -upgrade After distributing a new hdfs version, the namenode should be started with the upgrade option

hdfs namenode -rollback Roll back the namenode to the previous version. This option is used after stopping the cluster and distributing the old hdfs version

hdfs namenode -finalize Delete the previous state of the file system. The most recent upgrade becomes permanent, the rollback option is no longer available, and the namenode is stopped afterwards; then distribute the old hdfs version to use it

hdfs namenode -importCheckpoint Load an image from the checkpoint directory specified by fs.checkpoint.dir and save it into the current directory
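Since `hdfs namenode -format` wipes the file system metadata, it is worth guarding in scripts. A hedged sketch (the confirmation string is invented for illustration):

```shell
#!/bin/sh
# Sketch: refuse to format the namenode unless an explicit
# (hypothetical) confirmation string is passed.
guarded_format() {
  if [ "$1" != "YES_WIPE_METADATA" ]; then
    echo "refusing to format: pass YES_WIPE_METADATA to confirm"
    return 1
  fi
  hdfs namenode -format
}
```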

Part 3: mapreduce commands

hadoop jar file.jar Run the program packaged in a jar file

hadoop job -kill job_201005310937_0053 Kill the running jar program

hadoop job -submit <job-file> Submit a job

hadoop job -status <job-id> Print the map and reduce completion percentages and all counters

hadoop job -counter <job-id> <group-name> <counter-name> Print the value of a counter

hadoop job -kill <job-id> Kill the given job

hadoop job -events <job-id> <from-event-#> <#-of-events> Print the details of events received by the jobtracker in the given range

hadoop job -history [all] <jobOutputDir> Print job details plus failed and killed tasks and the reasons. More details about a job, such as successful tasks and task attempts, can be viewed with the [all] option

hadoop job -list [all] List jobs; -list alone shows only jobs that have not yet completed, while -list all shows all jobs

hadoop job -kill-task <task-id> Kill the task; killed tasks are not counted against failed attempts

hadoop job -fail-task <task-id> Fail the task; failed tasks are counted against failed attempts
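-list and -kill combine naturally in cleanup scripts. A sketch that kills every incomplete job listed by `hadoop job -list`; it assumes job ids appear at the start of a line and begin with "job_" (the exact listing format varies between Hadoop versions):

```shell
#!/bin/sh
# Sketch: kill every incomplete job reported by `hadoop job -list`,
# assuming job ids start the line with the prefix "job_".
kill_all_jobs() {
  hadoop job -list | awk '/^job_/{print $1}' | while read -r id; do
    hadoop job -kill "$id"
  done
}
```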

Part 4: the hdfs file system checker, fsck

hdfs fsck <path> -move Move corrupted files to /lost+found

hdfs fsck <path> -delete Delete corrupted files

hdfs fsck <path> -openforwrite Print files currently open for writing

hdfs fsck <path> -files Print the files being checked

hdfs fsck <path> -blocks Print a block information report

hdfs fsck <path> -locations Print the location of every block

hdfs fsck <path> -racks Print the network topology of the datanodes
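fsck ends its report with a summary line of the form "The filesystem under path '/' is HEALTHY" (or CORRUPT), which makes a simple health probe possible:

```shell
#!/bin/sh
# Sketch: a health probe that greps the fsck summary line.
fs_healthy() {
  hdfs fsck / 2>/dev/null | grep -q "is HEALTHY"
}
```

Usage: `if fs_healthy; then echo ok; fi` — suitable for cron-driven monitoring.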

Part 5: Running pipes jobs

hadoop pipes -conf <path> Configuration file for the job

hadoop pipes -jobconf <key=value>, <key=value>, ... Add or override configuration entries for the job

hadoop pipes -input <path> Input directory

hadoop pipes -output <path> Output directory

hadoop pipes -jar <jar file> Jar filename

hadoop pipes -inputformat <class> InputFormat class

hadoop pipes -map <class> Java Map class

hadoop pipes -partitioner <class> Java Partitioner

hadoop pipes -reduce <class> Java Reduce class

hadoop pipes -writer <class> Java RecordWriter

hadoop pipes -program <executable> URI of the program executable

hadoop pipes -reduces <num> Number of reduce tasks
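A complete pipes invocation assembled from the options above. The configuration file, input/output paths, and executable URI are all hypothetical placeholders:

```shell
#!/bin/sh
# Sketch: a full pipes submission with hypothetical paths and a
# hypothetical C++ wordcount executable already stored in hdfs.
run_pipes_wordcount() {
  hadoop pipes \
    -conf conf/wordcount.xml \
    -input /in/books \
    -output /out/wordcount \
    -program hdfs:///bin/wordcount \
    -reduces 4
}
```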

That concludes "What are the hdfs commands?". I hope the content above is helpful and lets you learn something new. If you found this article good, please share it so more people can see it.



