
How to view HDFS cluster status through the web


This article mainly introduces how to view HDFS cluster status through the web. Many people have doubts about this in day-to-day operation, so the editor has consulted various materials and put together some simple, easy-to-use steps. I hope they help resolve those doubts. Please follow along and study!

Questions covered:

1. How to view HDFS cluster status through the web

2. How to view the status of the ResourceManager running on the master node through the web

3. How to view the resource status of the NodeManagers running on the slave nodes through the web

4. What information can JobHistory show?

This article is a continuation of the Hadoop 2 fully distributed, high-reliability installation documentation.

After Hadoop 2.2 is installed, how should we use it? Here is a brief introduction.

1. You can view the status of the HDFS cluster in the web console by visiting the following address:

http://master:50070/


Source:

Component: HDFS

Node: NameNode

Default port: 50070

Configuration: dfs.namenode.http-address

Description: HTTP service port

These ports are covered in the article on common Hadoop 2.x ports and how they are defined (worth bookmarking for later reference).
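If you prefer the command line, the same cluster status can also be checked without the web console. The following is a minimal sketch, assuming the NameNode host is master and the cluster is running:

# Print a summary report of the HDFS cluster (capacity, live and dead DataNodes)
hdfs dfsadmin -report

# Look up which address and port the NameNode web UI is bound to
hdfs getconf -confKey dfs.namenode.http-address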

2. The ResourceManager runs on the master node, and its status can be checked in the web console:

http://master:8088/


If your hostname is not master, follow the format below.

http://ip-address:8088/


Or

http://hostname:8088/


Here is where port 8088 comes from. It is set by the following property in yarn-site.xml:

yarn.resourcemanager.webapp.address

master:8088
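As a rough sketch, the same ResourceManager status can also be queried through its REST API (assuming the web address master:8088 configured above):

# Basic cluster information (ResourceManager state, version, start time)
curl http://master:8088/ws/v1/cluster/info

# Cluster metrics such as active NodeManagers and running applications
curl http://master:8088/ws/v1/cluster/metrics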

3. The NodeManager runs on the slave nodes, and you can view the resource status of a given node through the web console, for example node slave1:

http://slave1:8042/


Source:

Component: YARN

Node: NodeManager

Default port: 8042

Configuration: yarn.nodemanager.webapp.address

Description: HTTP service port
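A hedged command-line alternative, assuming the slave node is named slave1 and the default port 8042 is in use:

# List the NodeManagers known to the ResourceManager, including their states
yarn node -list -all

# Query a single NodeManager's information through its REST API
curl http://slave1:8042/ws/v1/node/info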

4. Manage JobHistory Server

Start the JobHistory Server so that you can view information about the cluster's computing tasks through the web console. Execute the following command:

mr-jobhistory-daemon.sh start historyserver


Port 19888 is used by default.

You can then visit http://master:19888/

Source:

Component: MapReduce

Node: JobHistory Server

Default port: 19888

Configuration: mapreduce.jobhistory.webapp.address

Description: HTTP service port

All of the above ports can be found in the article on common Hadoop 2.x ports and how they are defined (worth bookmarking for later reference).
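A quick way to confirm the JobHistory Server is up, sketched under the assumption that it runs on master with the default port 19888:

# Basic information about the JobHistory Server through its REST API
curl http://master:19888/ws/v1/history/info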

To terminate JobHistory Server, execute the following command:

mr-jobhistory-daemon.sh stop historyserver


Overview of the FS shell

The File System (FS) shell includes various shell-like commands that interact directly with the Hadoop Distributed File System (HDFS), as well as other file systems that Hadoop supports, such as the Local FS, HFTP FS, S3 FS, and others. The FS shell is invoked as follows:


bin/hadoop fs <args>

All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For HDFS, the scheme is hdfs; for the local file system, the scheme is file. The scheme and authority are optional; if not specified, the default scheme from the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenodehost/parent/child or simplified to /parent/child (if the default configuration is set to point to hdfs://namenodehost). Most FS shell commands behave like their corresponding Unix commands; differences are noted in each command's description. Error information is sent to stderr and normal output goes to stdout.
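For example, the following two commands list the same HDFS directory; the second relies on the default file system being configured to point at hdfs://namenodehost (the path /parent/child is illustrative):

hadoop fs -ls hdfs://namenodehost/parent/child

hadoop fs -ls /parent/child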

AppendToFile [add File]

Usage: hadoop fs -appendToFile <localsrc> ... <dst>. Appends a single src, or multiple srcs, from the local file system to the destination file system. It can also read input from stdin and append it to the destination file system.

hadoop fs -appendToFile localfile /user/hadoop/hadoopfile

hadoop fs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile

hadoop fs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile

hadoop fs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)

Return code:

Returns 0 on success and -1 on error.

Cat

Usage: hadoop fs -cat URI [URI ...]

Outputs the contents of the specified files to stdout.

Example:

hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2

hadoop fs -cat file:///file3 /user/hadoop/file4

Return code:

Returns 0 on success and -1 on error.

Checksum

Usage: hadoop fs -checksum URI

Returns the checksum information of a file.

Example:

hadoop fs -checksum hdfs://nn1.example.com/file1

hadoop fs -checksum file:///etc/hosts

Chgrp

Usage: hadoop fs -chgrp [-R] GROUP URI [URI ...]

Changes the group of files. The user must be the owner of the files or a superuser. For more information, see the Permissions Guide.

Options:

The -R option makes the change recursive through the directory structure.
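A hedged example (the group name and path are illustrative):

# Recursively assign the group hadoop to everything under the directory
hadoop fs -chgrp -R hadoop /user/hadoop/dir1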

Chmod

Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]

Changes the permissions of files. The user must be the owner of the files or a superuser. For more information, see the Permissions Guide.

Options:

The -R option makes the change recursive through the directory structure.
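A hedged example (the mode and path are illustrative):

# Give the owner full access and everyone else read/execute, recursively
hadoop fs -chmod -R 755 /user/hadoop/dir1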

Chown

Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]

Changes the owner of files. The user must be a superuser. For more information, see the Permissions Guide.

Options:

The -R option makes the change recursive through the directory structure.
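A hedged example (the owner, group, and path are illustrative):

# Recursively set both the owner and the group of the directory
hadoop fs -chown -R hadoop:hadoop /user/hadoop/dir1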

CopyFromLocal

Usage: hadoop fs -copyFromLocal <localsrc> URI

Similar to the put command, except that the source is restricted to a local file reference.

Options:

The -f option overwrites the destination if it already exists.
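A hedged example (the paths are illustrative):

# Copy a local file into HDFS, overwriting the destination if it exists
hadoop fs -copyFromLocal -f localfile /user/hadoop/hadoopfile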

CopyToLocal

Usage: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>

Similar to the get command, except that the destination is restricted to a local file reference.
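A hedged example (the paths are illustrative):

# Copy an HDFS file to the local file system
hadoop fs -copyToLocal /user/hadoop/file localfile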

Count

Usage: hadoop fs -count [-q] [-h] [-v] <paths>

Counts the number of directories, files, and bytes under the given paths. The output columns are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME

[number of directories, number of files, total size, path name]

With -count -q the output columns are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME

[quota, remaining quota, space quota, remaining space quota, number of directories, number of files, total size, path name]

The -h option shows sizes in human-readable format.

The -v option displays a header line.

Example:

hadoop fs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2

hadoop fs -count -q hdfs://nn1.example.com/file1

hadoop fs -count -q -h hdfs://nn1.example.com/file1

hdfs dfs -count -q -h -v hdfs://nn1.example.com/file1

Return code:

Returns 0 on success and -1 on error.

Cp

Usage: hadoop fs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest>

Copies files; this command also allows copying multiple files to a directory.

raw.* namespace extended attributes are preserved if:

(1) the source and destination file systems support them (HDFS only), and

(2) all source and destination pathnames are in the /.reserved/raw hierarchy.

Whether raw.* namespace extended attributes are preserved is independent of the -p (preserve) flag.

Options:

The -f option overwrites the destination if it already exists.

The -p option preserves file attributes [topax] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no arguments, it preserves timestamps, ownership, and permission. If -pa is specified, permission is also preserved because ACL is a super-set of permission.

Example:

hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2

hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
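A hedged variant combining the options above (the paths are illustrative):

# Overwrite the destination if it exists and preserve timestamps, ownership, and permission
hadoop fs -cp -f -p /user/hadoop/file1 /user/hadoop/dir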

Return code:

Returns 0 on success and -1 on error.

CreateSnapshot

Check out HDFS Snapshots Guide.

DeleteSnapshot

Check out HDFS Snapshots Guide.

Df [see how much hdfs space is left]

Usage: hadoop fs -df [-h] URI [URI ...]

Show remaining space

Options:

The -h option formats sizes in a human-readable way (e.g. 64.0m instead of 67108864).

Example:

hadoop dfs -df /user/hadoop/dir1

Du

Usage: hadoop fs -du [-s] [-h] URI [URI ...]. Displays the sizes of the files and directories contained in the given directory, or the length of a file in case it is just a file.

Options:

The -s option shows an aggregate summary of file lengths rather than the individual files.

The -h option formats sizes in a human-readable way (e.g. 64.0m instead of 67108864).

Example:

hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1

Return code:

Returns 0 on success and -1 on error.

Dus

Usage: hadoop fs -dus <args>

Displays a summary of file lengths.

Note: this command is deprecated; use hadoop fs -du -s instead.

Expunge

Usage: hadoop fs -expunge

Empties the trash. See the HDFS Architecture Guide for more information on the trash feature.
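A minimal sketch, assuming the trash feature is enabled (fs.trash.interval set to a value greater than 0 in core-site.xml):

# Permanently delete files from the trash that are older than the retention threshold
hadoop fs -expunge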

Find

Usage: hadoop fs -find <path> ... <expression> ... Finds all files that match the specified expression and applies selected actions to them. If no path is specified, it defaults to the current working directory. If no expression is specified, it defaults to -print.

The following primary expressions are available:

-name pattern

-iname pattern

Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used, the match is case-insensitive.

-print

-print0

Always evaluates to true. Causes the current pathname to be written to standard output. If the -print0 expression is used, an ASCII NULL character is appended.

The following operators are recognized:

expression -a expression

expression -and expression

expression expression

The logical AND operator joins two expressions; it returns true if both child expressions return true. It is implied by the juxtaposition of two expressions and so does not need to be explicitly specified. The second expression is not applied if the first fails.

Example:

hadoop fs -find / -name test -print

Return code:

Returns 0 on success and -1 on error.

Get

Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>

Copies files to the local file system.

Files that fail the CRC check may be copied with the -ignorecrc option.

Files and their CRCs may be copied with the -crc option.

Example:

hadoop fs -get /user/hadoop/file localfile

hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile

Return code:

Returns 0 on success and -1 on error.

This concludes the study of how to view HDFS cluster status through the web. I hope it has helped resolve your doubts. Pairing theory with practice is the best way to learn, so go and try it! If you want to keep learning more related knowledge, please continue to follow this site; the editor will keep working hard to bring you more practical articles!
