What are the Apache Hadoop 2.4.1 commands?


This article introduces the commands available in Apache Hadoop 2.4.1. Most people are not very familiar with them, so it is shared here for your reference; I hope you find it useful.

Overview

All Hadoop commands are invoked by the bin/hadoop script. Running the script without any arguments prints a description of all commands.

Usage: hadoop [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]

Hadoop has an option-parsing framework that handles the generic options as well as the arguments of the class being run.

--config confdir: Overwrites the default configuration directory. The default is $HADOOP_HOME/conf.

GENERIC_OPTIONS: The common set of options supported by multiple commands.

COMMAND_OPTIONS: The options of the individual commands, described in the following sections. The commands are divided into user commands and administrator commands.

Generic Options

The dfsadmin, fs, fsck, job, and fetchdt commands all support the generic options below. Applications must implement the Tool interface to support generic option parsing.

-conf <configuration file>: Specify an application configuration file.
-D <property>=<value>: Use the given value for the given property.
-jt <local> or <jobtracker:port>: Specify a job tracker. Applies only to job.
-files <comma-separated list of files>: Specify files to be copied to the MapReduce cluster. Applies only to job.
-libjars <comma-separated list of jars>: Specify jar files to include in the classpath. Applies only to job.
-archives <comma-separated list of archives>: Specify archives to be unarchived on the compute machines. Applies only to job.
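For example, a generic option can be combined with any of the supporting commands; a minimal sketch, where the file name, replication value, and HDFS path are hypothetical:

hadoop fs -D dfs.replication=2 -put report.txt /user/hadoop/report.txt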

User Commands

Commands useful for users of a Hadoop cluster.

Archive

Creates a Hadoop archive. More information can be found in the Hadoop Archives guide.

Usage: hadoop archive -archiveName NAME <src>* <dest>

-archiveName NAME: Name of the archive to be created.
src: Filesystem pathnames, which work as usual with regular expressions.
dest: Destination directory that will contain the archive file.
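For example, a minimal sketch following the usage above, with hypothetical source directories and destination:

hadoop archive -archiveName data.har /user/hadoop/dir1 /user/hadoop/dir2 /user/hadoop/archives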

Distcp

Copies files or directories recursively. More information can be found in the Hadoop DistCp guide.

Usage: hadoop distcp <srcurl> <desturl>

srcurl: Source URL.
desturl: Destination URL.
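For example, a sketch copying between two hypothetical clusters (the NameNode addresses and paths are assumptions):

hadoop distcp hdfs://nn1.example.com:8020/user/hadoop/src hdfs://nn2.example.com:8020/user/hadoop/dest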

Fs

Usage: hadoop fs [GENERIC_OPTIONS] [COMMAND_OPTIONS]

Deprecated; use hdfs dfs instead.

Runs a generic filesystem user client. The various COMMAND_OPTIONS can be found in the File System Shell guide.

Fsck

Runs an HDFS filesystem checking utility. Refer to fsck for more information.

Usage: hadoop fsck [GENERIC_OPTIONS] <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]

path: Start checking from this path.
-move: Move corrupted files to /lost+found.
-delete: Delete corrupted files.
-openforwrite: Print out files opened for write.
-files: Print out the files being checked.
-blocks: Print out the block report.
-locations: Print out locations for every block.
-racks: Print out network topology for data-node locations.
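For example, checking the entire namespace and printing the files and blocks being checked:

hadoop fsck / -files -blocks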

Fetchdt

Gets the delegation token from the NameNode. For more information, refer to fetchdt.

Usage: hadoop fetchdt [GENERIC_OPTIONS] [--webservice <namenode_http_addr>] <path>

fileName: File name to store the token into.
--webservice https_address: Use the HTTP protocol instead of RPC.
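For example, a sketch that fetches a token over HTTP and stores it in a local file; the NameNode web address and token path are assumptions:

hadoop fetchdt --webservice http://nn.example.com:50070 /tmp/my.token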

Jar

Runs a jar file. Users can bundle their MapReduce code in a jar file and execute it with this command.

Usage: hadoop jar <jar> [mainClass] args...

Streaming jobs are run via this command; examples can be found in the Streaming examples. The word count example is also run using the jar command; it can be found in the WordCount example.
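For example, a sketch running a hypothetical WordCount driver class bundled in wordcount.jar, with hypothetical HDFS input and output paths:

hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output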

Job

Command to interact with MapReduce jobs.

Usage: hadoop job [GENERIC_OPTIONS] [-submit <job-file>] | [-status <job-id>] | [-counter <job-id> <group-name> <counter-name>] | [-kill <job-id>] | [-events <job-id> <from-event-#> <#-of-events>] | [-history [all] <jobOutputDir>] | [-list [all]] | [-kill-task <task-id>] | [-fail-task <task-id>] | [-set-priority <job-id> <priority>]

-submit job-file: Submits the job.
-status job-id: Prints the map and reduce completion percentage and all job counters.
-counter job-id group-name counter-name: Prints the counter value.
-kill job-id: Kills the job.
-events job-id from-event-# #-of-events: Prints the details of events received by the jobtracker for the given range.
-history [all] jobOutputDir: Prints job details, and failed and killed tip details. More details about the job, such as successful tasks and the task attempts made for each task, can be viewed by specifying the [all] option.
-list [all]: Displays jobs which are yet to complete. -list all displays all jobs.
-kill-task task-id: Kills the task. Killed tasks are NOT counted against failed attempts.
-fail-task task-id: Fails the task. Failed tasks are counted against failed attempts.
-set-priority job-id priority: Changes the priority of the job. Allowed priority values are VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW.
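For example, a sketch that prints the status of a hypothetical job id and then lists all jobs:

hadoop job -status job_201401011200_0001
hadoop job -list all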

Pipes

Runs a pipes job.

Usage: hadoop pipes [-conf <path>] [-jobconf <key=value>, <key=value>, ...] [-input <path>] [-output <path>] [-jar <jar file>] [-inputformat <class>] [-map <class>] [-partitioner <class>] [-reduce <class>] [-writer <class>] [-program <executable>] [-reduces <num>]

-conf path: Configuration for the job.
-jobconf key=value, key=value, ...: Add or override configuration for the job.
-input path: Input directory.
-output path: Output directory.
-jar jar file: Jar filename.
-inputformat class: InputFormat class.
-map class: Java Map class.
-partitioner class: Java Partitioner.
-reduce class: Java Reduce class.
-writer class: Java RecordWriter.
-program executable: Executable URI.
-reduces num: Number of reduces.
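For example, a sketch running a hypothetical C++ executable that has already been uploaded to HDFS (all paths are assumptions; real jobs typically also need -jobconf settings):

hadoop pipes -input /user/hadoop/input -output /user/hadoop/output -program /user/hadoop/bin/wordcount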

Queue

Command to interact with and view Hadoop job queue information.

Usage: hadoop queue [-list] | [-info <job-queue-name> [-showJobs]] | [-showacls]

-list: Gets the list of job queues configured in the system, along with the scheduling information associated with them.
-info job-queue-name [-showJobs]: Displays the queue information and associated scheduling information of the specified job queue. If the -showJobs option is present, a list of the jobs submitted to that job queue is displayed.
-showacls: Displays the queue name and the queue operations allowed for the current user. The list contains only those queues to which the user has access.
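For example, listing the configured queues and then showing the operations the current user may perform:

hadoop queue -list
hadoop queue -showacls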

Version

Prints the Hadoop version.

Usage: hadoop version

CLASSNAME

The hadoop script can be used to invoke any class.

Usage: hadoop CLASSNAME

Runs the class named CLASSNAME.

Classpath

Prints the class path needed to get the Hadoop jar and the required libraries.

Usage: hadoop classpath

Administration Commands

Commands useful for administrators of a Hadoop cluster.

Balancer

Runs a cluster balancing utility. An administrator can simply press Ctrl-C to stop the rebalancing process. For more details, refer to the Rebalancer documentation.

Usage: hadoop balancer [-threshold <threshold>]

-threshold threshold: Percentage of disk capacity. This overwrites the default threshold.
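For example, a sketch that balances until every datanode's utilization is within 5 percent of the cluster average:

hadoop balancer -threshold 5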

Daemonlog

Gets or sets the log level for each daemon.

Usage: hadoop daemonlog -getlevel <host:port> <name>

Usage: hadoop daemonlog -setlevel <host:port> <name> <level>

-getlevel host:port name: Prints the log level of the daemon running at host:port. This command internally connects to http://host:port/logLevel?log=name
-setlevel host:port name level: Sets the log level of the daemon running at host:port. This command internally connects to http://host:port/logLevel?log=name
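For example, a sketch that raises a NameNode logger to DEBUG and then reads the level back; the host, port, and logger name are assumptions:

hadoop daemonlog -setlevel nn.example.com:50070 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG
hadoop daemonlog -getlevel nn.example.com:50070 org.apache.hadoop.hdfs.server.namenode.NameNode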

Datanode

Runs an HDFS datanode.

Usage: hadoop datanode [-rollback]

-rollback: Rolls back the datanode to the previous version. This should be used after stopping the datanode and distributing the old Hadoop version.

Dfsadmin

Runs an HDFS dfsadmin client.

Usage: hadoop dfsadmin [GENERIC_OPTIONS] [-report] [-safemode enter | leave | get | wait] [-refreshNodes] [-finalizeUpgrade] [-upgradeProgress status | details | force] [-metasave filename] [-setQuota <quota> <dirname>...<dirname>] [-clrQuota <dirname>...<dirname>] [-restoreFailedStorage true | false | check] [-help [cmd]]

-report: Reports basic filesystem information and statistics.
-safemode enter | leave | get | wait: Safe mode maintenance command. Safe mode is a NameNode state in which it:

1. does not accept changes to the name space (read-only), and
2. does not replicate or delete blocks.

The NameNode enters safe mode automatically at startup, and leaves it automatically when the configured minimum percentage of blocks satisfies the minimum replication condition. Safe mode can also be entered manually, but then it must be left manually as well.

-refreshNodes: Re-reads the hosts and exclude files to update the set of datanodes that are allowed to connect to the NameNode and those that should be decommissioned or recommissioned.
-finalizeUpgrade: Finalizes an upgrade of HDFS. Datanodes delete their previous-version working directories, followed by the NameNode doing the same. This completes the upgrade process.
-upgradeProgress status | details | force: Requests the current distributed upgrade status, a detailed status, or forces the upgrade to proceed.
-metasave filename: Saves the NameNode's primary data structures to filename in the directory specified by the hadoop.log.dir property. filename is overwritten if it already exists. filename will contain one line for each of the following:

1. datanodes heartbeating with the NameNode,
2. blocks waiting to be replicated,
3. blocks currently being replicated, and
4. blocks waiting to be deleted.

-setQuota quota dirname...dirname: Sets the quota for each directory dirname. The directory quota is a long integer that puts a hard limit on the number of names in the directory tree. Best effort for each directory, with faults reported if:

1. the user is not an administrator,
2. N is not a positive integer,
3. the directory does not exist or is a file, or
4. the directory would immediately exceed the new quota.

-clrQuota dirname...dirname: Clears the quota for each directory dirname. Best effort for each directory, with a fault reported if:

1. the directory does not exist or is a file, or
2. the user is not an administrator.

It does not fault if the directory has no quota.

-restoreFailedStorage true | false | check: Turns on or off the automatic attempt to restore failed storage replicas. If a failed storage becomes available again, the system will attempt to restore edits and/or the fsimage during a checkpoint. 'check' returns the current setting.
-help [cmd]: Displays help for the given command, or for all commands if none is specified.
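For example, two read-only sketches that report cluster statistics and query the current safe-mode state:

hadoop dfsadmin -report
hadoop dfsadmin -safemode get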

Mradmin

Runs an MR admin client.

Usage: hadoop mradmin [GENERIC_OPTIONS] [-refreshQueueAcls]

-refreshQueueAcls: Refreshes the queue ACLs used by Hadoop.

Jobtracker

Runs the MapReduce job tracker node.

Usage: hadoop jobtracker [-dumpConfiguration]

-dumpConfiguration: Dumps the configuration used by the JobTracker, along with the queue configuration, in JSON format to standard output, and then exits.

Namenode

Runs the NameNode. For more information about upgrade, rollback, and finalize, refer to Upgrade and Rollback.

Usage: hadoop namenode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]

-format: Formats the NameNode. It starts the NameNode, formats it, and then shuts it down.
-upgrade: The NameNode should be started with the upgrade option after a new Hadoop version has been distributed.
-rollback: Rolls back the NameNode to the previous version. This should be used after stopping the cluster and distributing the old Hadoop version.
-finalize: Finalize removes the previous state of the filesystem. The most recent upgrade becomes permanent, and the rollback option is no longer available. After finalization it shuts the NameNode down.
-importCheckpoint: Loads the image from a checkpoint directory and saves it into the current one. The checkpoint directory is read from the property fs.checkpoint.dir.
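For example, formatting a brand-new NameNode before its first start (this is destructive on an existing cluster, so use it only on a fresh install):

hadoop namenode -format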

Secondarynamenode

Runs the HDFS secondary NameNode. Refer to Secondary Namenode for more information.

Usage: hadoop secondarynamenode [-checkpoint [force]] | [-geteditsize]

-checkpoint [force]: Checkpoints the secondary NameNode if the EditLog size is >= fs.checkpoint.size. If -force is used, a checkpoint is performed regardless of the EditLog size.
-geteditsize: Prints the EditLog size.

Tasktracker

Runs a MapReduce tasktracker node.

Usage: hadoop tasktracker

That covers the contents of "What are the Apache Hadoop 2.4.1 commands?". Thank you for reading; I hope this article has been helpful.
