This article explains the common ports used by Hadoop 2.x and how each of them is defined. The content is simple and clear, and it should help resolve any doubts about which port belongs to which service.
The components of a Hadoop cluster use a number of ports: some for communication between daemons, some for RPC access, and some for HTTP access. As more peripheral components are added, it becomes hard to remember which port corresponds to which application, so they are collected here for reference (a quick way to check them on a live cluster is sketched after the table).
The components covered here are HDFS, YARN, HBase, Hive, and ZooKeeper:
Component | Node | Default port | Configuration | Description
HDFS | DataNode | 50010 | dfs.datanode.address | DataNode service port, used for data transfer
HDFS | DataNode | 50075 | dfs.datanode.http.address | HTTP service port
HDFS | DataNode | 50020 | dfs.datanode.ipc.address | IPC service port
HDFS | NameNode | 50070 | dfs.namenode.http-address | HTTP service port
HDFS | NameNode | 50470 | dfs.namenode.https-address | HTTPS service port
HDFS | NameNode | 8020 | fs.defaultFS | RPC port that accepts client connections, used to obtain file system metadata
HDFS | JournalNode | 8485 | dfs.journalnode.rpc-address | RPC service
HDFS | JournalNode | 8480 | dfs.journalnode.http-address | HTTP service
HDFS | ZKFC | 8019 | dfs.ha.zkfc.port | ZooKeeper FailoverController, used for NameNode HA
YARN | ResourceManager | 8032 | yarn.resourcemanager.address | RM applications manager (ASM) port
YARN | ResourceManager | 8030 | yarn.resourcemanager.scheduler.address | IPC port of the scheduler component
YARN | ResourceManager | 8031 | yarn.resourcemanager.resource-tracker.address | IPC
YARN | ResourceManager | 8033 | yarn.resourcemanager.admin.address | IPC
YARN | ResourceManager | 8088 | yarn.resourcemanager.webapp.address | HTTP service port
YARN | NodeManager | 8040 | yarn.nodemanager.localizer.address | localizer IPC
YARN | NodeManager | 8042 | yarn.nodemanager.webapp.address | HTTP service port
YARN | NodeManager | 8041 | yarn.nodemanager.address | port of the container manager in the NM
YARN | JobHistory Server | 10020 | mapreduce.jobhistory.address | IPC
YARN | JobHistory Server | 19888 | mapreduce.jobhistory.webapp.address | HTTP service port
HBase | Master | 60000 | hbase.master.port | IPC
HBase | Master | 60010 | hbase.master.info.port | HTTP service port
HBase | RegionServer | 60020 | hbase.regionserver.port | IPC
HBase | RegionServer | 60030 | hbase.regionserver.info.port | HTTP service port
HBase | HQuorumPeer | 2181 | hbase.zookeeper.property.clientPort | HBase-managed ZooKeeper mode; not used when a separate ZooKeeper cluster is deployed
HBase | HQuorumPeer | 2888 | hbase.zookeeper.peerport | HBase-managed ZooKeeper mode; not used when a separate ZooKeeper cluster is deployed
HBase | HQuorumPeer | 3888 | hbase.zookeeper.leaderport | HBase-managed ZooKeeper mode; not used when a separate ZooKeeper cluster is deployed
Hive | Metastore | 9083 | export PORT=<port> in /etc/default/hive-metastore | update the default port with this export
Hive | HiveServer2 | 10000 | export HIVE_SERVER2_THRIFT_PORT=<port> in /etc/hive/conf/hive-env.sh | update the default port with this export
ZooKeeper | Server | 2181 | clientPort=<port> in /etc/zookeeper/conf/zoo.cfg | port that serves client connections
ZooKeeper | Server | 2888 | server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | the first nnnnn: followers use it to connect to the leader, and only the leader listens on it
ZooKeeper | Server | 3888 | server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | the second nnnnn: used for leader election; only required when electionAlg is 1, 2, or 3 (the default)
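As a quick way to see which of these values are actually in effect on a given cluster, the hdfs getconf tool prints any HDFS key from the table, and the Hive ports can be pinned with the exports listed above. A minimal sketch, assuming a node with the cluster's client configuration installed and that it lives under /etc/hadoop/conf (hostnames and paths are placeholders):

# Print the effective value of an HDFS configuration key (the default applies if it is unset).
hdfs getconf -confKey fs.defaultFS                 # e.g. hdfs://nn01:8020
hdfs getconf -confKey dfs.namenode.http-address    # e.g. 0.0.0.0:50070
hdfs getconf -confKey dfs.datanode.address         # e.g. 0.0.0.0:50010

# YARN keys can simply be grepped out of the site file as an illustration.
grep -A1 'yarn.resourcemanager.webapp.address' /etc/hadoop/conf/yarn-site.xml

# Hive ports, by contrast, are set via environment exports in the files from the table, e.g.:
#   export PORT=9083                        # in /etc/default/hive-metastore
#   export HIVE_SERVER2_THRIFT_PORT=10000   # in /etc/hive/conf/hive-env.sh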
All of the ports above use TCP.
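One way to confirm this on a running node is to list the listening TCP sockets and match them against the table. A minimal sketch; filtering on "java" is just a convenient way to narrow the output to the Hadoop daemons, which all run in the JVM:

# Listening TCP sockets with the owning process
ss -ltnp | grep java

# On older systems without ss, netstat works the same way
netstat -ltnp | grep java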
Every Hadoop daemon that runs a web UI (HTTP service) also exposes the following URLs (a curl sketch follows the list):
/logs
Lists the daemon's log files for viewing and downloading.
/logLevel
Lets you query and set the log4j logging level, similar to the hadoop daemonlog command.
/stacks
Dumps stack traces for all threads, which is very helpful for debugging.
/jmx
Server-side metrics, output in JSON format.
/jmx?qry=Hadoop:* returns all Hadoop-related metrics.
/jmx?get=MXBeanName::AttributeName returns the value of the specified attribute of the specified bean; for example, /jmx?get=Hadoop:service=NameNode,name=NameNodeInfo::ClusterId returns the ClusterId.
These requests are handled by org.apache.hadoop.jmx.JMXJsonServlet.
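These endpoints can be exercised with nothing more than curl. The sketch below targets a NameNode on its default HTTP port, but any daemon with a web UI behaves the same way; nn01 is a placeholder hostname:

# List the daemon's log files
curl http://nn01:50070/logs/

# Query and then change the log4j level of a class; the CLI equivalent is hadoop daemonlog
curl "http://nn01:50070/logLevel?log=org.apache.hadoop.hdfs.server.namenode.NameNode"
curl "http://nn01:50070/logLevel?log=org.apache.hadoop.hdfs.server.namenode.NameNode&level=DEBUG"
hadoop daemonlog -setlevel nn01:50070 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG

# Stack traces for all threads
curl http://nn01:50070/stacks

# All Hadoop metrics, then a single attribute of a single bean
curl "http://nn01:50070/jmx?qry=Hadoop:*"
curl "http://nn01:50070/jmx?get=Hadoop:service=NameNode,name=NameNodeInfo::ClusterId"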
In addition, each daemon has its own URL paths with daemon-specific information; example requests follow each list below.
NameNode: http://<namenode>:50070/
/dfshealth.jsp
The HDFS information page, with links for browsing the file system.
/dfsnodelist.jsp?whatNodes=(DEAD|LIVE)
Lists the DataNodes that are in the DEAD or LIVE state.
/fsck
Runs the fsck command; not recommended while the cluster is busy!
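A couple of hedged examples against these NameNode paths, again with nn01 as a placeholder host; on later 2.x releases the .jsp pages may redirect to the newer HTML UI, and fsck over a large namespace is expensive:

# HDFS overview page and the live/dead DataNode lists
curl http://nn01:50070/dfshealth.jsp
curl "http://nn01:50070/dfsnodelist.jsp?whatNodes=LIVE"
curl "http://nn01:50070/dfsnodelist.jsp?whatNodes=DEAD"

# fsck is normally run through the CLI, which drives this NameNode servlet
hdfs fsck / -files -blocks -locations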
DataNode: http://<datanode>:50075/
/blockScannerReport
Shows the block scanner report; each DataNode verifies its block information at a configured interval.
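A minimal sketch for this DataNode endpoint, with dn01 as a placeholder host; the listblocks parameter asks for per-block detail and can produce a very large response:

# Summary of the DataNode's block verification progress
curl http://dn01:50075/blockScannerReport

# Per-block detail (can be huge on a DataNode with many blocks)
curl "http://dn01:50075/blockScannerReport?listblocks"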
The above is the full content of "what are the common ports and definition methods of hadoop2.x". Thank you for reading! I hope it has been helpful.