This article continues from the previous one: IDC cluster metrics collection.
After collecting the metrics of the IDC machines, you also need to collect the HDFS and YARN metrics of the Hadoop cluster. Generally speaking, there are two approaches:
The first is to use the Cloudera Manager (CM) API, since CM's tsquery provides a very rich set of monitoring metrics.
The second is to fetch the data through JMX: access the relevant URLs, parse the returned JSON to extract the values we need, then merge the data and run the collection on a schedule.
In practice, JMX is used here. The URL requests involved are as follows:
http://localhost:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo
http://localhost:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState
The implementation breaks down as follows:
First, you need an HTTP client to send requests to the server and fetch the JSON data; for this we write StatefulHttpClient.
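The StatefulHttpClient from the repository is not reproduced in the article; the following is only a minimal sketch of the idea, assuming Java 11's built-in java.net.http client and the JsonUtil shown next. The real class presumably carries the state (cookies, sessions) its name implies; this sketch ignores the header and parameter arguments.

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// Minimal sketch: GET a URL and map the JSON body onto a target class.
// The real StatefulHttpClient is in the GitHub repo; headers and params
// are accepted but ignored here for brevity.
public class StatefulHttpClient {
    private final HttpClient http = HttpClient.newHttpClient();

    public StatefulHttpClient(Object cookieStore) {
        // state handling (cookies, sessions) omitted in this sketch
    }

    public <T> T get(Class<T> type, String url,
                     Map<String, String> headers, Map<String, String> params) throws IOException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        try {
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            return JsonUtil.fromJson(type, response.body());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException(e);
        }
    }
}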
Second, the utility class JsonUtil handles conversion between JSON strings and Java objects.
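JsonUtil is likewise in the repository; a minimal sketch, assuming the Jackson library on the classpath, only needs the two conversions used in the code below:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.type.MapType;
import java.io.IOException;
import java.util.Map;

// Sketch of the JSON utility. fromJsonMap matches the call made in
// HadoopUtil.dataNodeInfoReader below.
public class JsonUtil {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static <T> T fromJson(Class<T> type, String json) throws IOException {
        return MAPPER.readValue(json, type);
    }

    public static <K, V> Map<K, V> fromJsonMap(Class<K> keyType, Class<V> valueType,
                                               String json) throws IOException {
        MapType mapType = MAPPER.getTypeFactory().constructMapType(Map.class, keyType, valueType);
        return MAPPER.readValue(json, mapType);
    }
}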
We also need to work out which monitoring metrics to collect and write the corresponding entity classes. Taking HDFS as the example, these are mainly HdfsSummary and DataNodeInfo, sketched below.
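The full entity classes are in the repository; the abbreviated sketch below is illustrative only. They are plain POJOs whose fields mirror the JMX attribute names; HdfsSummary additionally holds the lists of live/dead DataNodeInfo and the printInfo() method called by MonitorApp further down. Only a few representative fields are spelled out here.

// Abbreviated sketch of DataNodeInfo. The full version carries one field
// per value set in HadoopUtil, each with the usual getter/setter pair.
public class DataNodeInfo {
    private String nodeName;
    private String nodeAddr;
    private int lastContact;
    private double usedSpace;   // GB, converted in HadoopUtil
    private String adminState;
    private double capacity;    // GB

    public String getNodeName() { return nodeName; }
    public void setNodeName(String nodeName) { this.nodeName = nodeName; }
    public String getNodeAddr() { return nodeAddr; }
    public void setNodeAddr(String nodeAddr) { this.nodeAddr = nodeAddr; }
    public int getLastContact() { return lastContact; }
    public void setLastContact(int lastContact) { this.lastContact = lastContact; }
    public double getUsedSpace() { return usedSpace; }
    public void setUsedSpace(double usedSpace) { this.usedSpace = usedSpace; }
    public String getAdminState() { return adminState; }
    public void setAdminState(String adminState) { this.adminState = adminState; }
    public double getCapacity() { return capacity; }
    public void setCapacity(double capacity) { this.capacity = capacity; }
    // ... nonDfsUsedSpace, numBlocks, remaining, blockPoolUsed and
    // blockPoolUsedPercent follow the same field/getter/setter pattern
}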
The full code for this case is on GitHub, address:
Here is the core code:
MonitorMetrics.java:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MonitorMetrics {
    // "beans" is the top-level key of the JSON string returned by JMX;
    // the structure is {"beans": [{"name": "value", ...}]}
    private List<Map<String, Object>> beans = new ArrayList<>();

    public List<Map<String, Object>> getBeans() {
        return beans;
    }

    public void setBeans(List<Map<String, Object>> beans) {
        this.beans = beans;
    }

    public Object getMetricsValue(String name) {
        if (beans.isEmpty()) {
            return null;
        }
        return beans.get(0).getOrDefault(name, null);
    }
}
HadoopUtil.java:
import java.io.IOException;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class HadoopUtil {
    public static long gbLength = 1073741824L; // bytes per GB
    public static final String hadoopJmxServerUrl = "http://localhost:50070";
    public static final String jmxServerUrlFormat = "%s/jmx?qry=%s";
    public static final String nameNodeInfo = "Hadoop:service=NameNode,name=NameNodeInfo";
    public static final String fsNameSystemState = "Hadoop:service=NameNode,name=FSNamesystemState";

    public static HdfsSummary getHdfsSummary(StatefulHttpClient client) throws IOException {
        HdfsSummary hdfsSummary = new HdfsSummary();
        String namenodeUrl = String.format(jmxServerUrlFormat, hadoopJmxServerUrl, nameNodeInfo);
        MonitorMetrics monitorMetrics = client.get(MonitorMetrics.class, namenodeUrl, null, null);
        hdfsSummary.setTotal(doubleFormat(monitorMetrics.getMetricsValue("Total"), gbLength));
        hdfsSummary.setDfsFree(doubleFormat(monitorMetrics.getMetricsValue("Free"), gbLength));
        hdfsSummary.setDfsUsed(doubleFormat(monitorMetrics.getMetricsValue("Used"), gbLength));
        hdfsSummary.setPercentUsed(doubleFormat(monitorMetrics.getMetricsValue("PercentUsed")));
        hdfsSummary.setSafeMode(monitorMetrics.getMetricsValue("Safemode").toString());
        hdfsSummary.setNonDfsUsed(doubleFormat(monitorMetrics.getMetricsValue("NonDfsUsedSpace"), gbLength));
        hdfsSummary.setBlockPoolUsedSpace(doubleFormat(monitorMetrics.getMetricsValue("BlockPoolUsedSpace"), gbLength));
        hdfsSummary.setPercentBlockPoolUsed(doubleFormat(monitorMetrics.getMetricsValue("PercentBlockPoolUsed")));
        hdfsSummary.setPercentRemaining(doubleFormat(monitorMetrics.getMetricsValue("PercentRemaining")));
        hdfsSummary.setTotalBlocks((int) monitorMetrics.getMetricsValue("TotalBlocks"));
        hdfsSummary.setTotalFiles((int) monitorMetrics.getMetricsValue("TotalFiles"));
        hdfsSummary.setMissingBlocks((int) monitorMetrics.getMetricsValue("NumberOfMissingBlocks"));
        // LiveNodes/DeadNodes are themselves JSON strings and need a second parse
        String liveNodesJson = monitorMetrics.getMetricsValue("LiveNodes").toString();
        String deadNodesJson = monitorMetrics.getMetricsValue("DeadNodes").toString();
        List<DataNodeInfo> liveNodes = dataNodeInfoReader(liveNodesJson);
        List<DataNodeInfo> deadNodes = dataNodeInfoReader(deadNodesJson);
        hdfsSummary.setLiveDataNodeInfos(liveNodes);
        hdfsSummary.setDeadDataNodeInfos(deadNodes);

        String fsNameSystemStateUrl = String.format(jmxServerUrlFormat, hadoopJmxServerUrl, fsNameSystemState);
        MonitorMetrics hadoopMetrics = client.get(MonitorMetrics.class, fsNameSystemStateUrl, null, null);
        hdfsSummary.setNumLiveDataNodes((int) hadoopMetrics.getMetricsValue("NumLiveDataNodes"));
        hdfsSummary.setNumDeadDataNodes((int) hadoopMetrics.getMetricsValue("NumDeadDataNodes"));
        hdfsSummary.setVolumeFailuresTotal((int) hadoopMetrics.getMetricsValue("VolumeFailuresTotal"));
        return hdfsSummary;
    }

    public static List<DataNodeInfo> dataNodeInfoReader(String jsonData) throws IOException {
        List<DataNodeInfo> dataNodeInfos = new ArrayList<>();
        Map<String, Object> nodes = JsonUtil.fromJsonMap(String.class, Object.class, jsonData);
        for (Map.Entry<String, Object> node : nodes.entrySet()) {
            @SuppressWarnings("unchecked")
            Map<String, Object> info = (Map<String, Object>) node.getValue();
            String nodeName = node.getKey().split(":")[0];
            DataNodeInfo dataNodeInfo = new DataNodeInfo();
            dataNodeInfo.setNodeName(nodeName);
            dataNodeInfo.setNodeAddr(info.get("infoAddr").toString().split(":")[0]);
            dataNodeInfo.setLastContact((int) info.get("lastContact"));
            dataNodeInfo.setUsedSpace(doubleFormat(info.get("usedSpace"), gbLength));
            dataNodeInfo.setAdminState(info.get("adminState").toString());
            dataNodeInfo.setNonDfsUsedSpace(doubleFormat(info.get("nonDfsUsedSpace"), gbLength));
            dataNodeInfo.setCapacity(doubleFormat(info.get("capacity"), gbLength));
            dataNodeInfo.setNumBlocks((int) info.get("numBlocks"));
            dataNodeInfo.setRemaining(doubleFormat(info.get("remaining"), gbLength));
            dataNodeInfo.setBlockPoolUsed(doubleFormat(info.get("blockPoolUsed"), gbLength));
            dataNodeInfo.setBlockPoolUsedPercent(doubleFormat(info.get("blockPoolUsedPercent")));
            dataNodeInfos.add(dataNodeInfo);
        }
        return dataNodeInfos;
    }

    public static DecimalFormat df = new DecimalFormat("#.##"); // keep two decimal places

    public static double doubleFormat(Object num, long unit) {
        double result = Double.parseDouble(String.valueOf(num)) / unit;
        return Double.parseDouble(df.format(result));
    }

    public static double doubleFormat(Object num) {
        double result = Double.parseDouble(String.valueOf(num));
        return Double.parseDouble(df.format(result));
    }

    public static void main(String[] args) {
        String res = String.format(jmxServerUrlFormat, hadoopJmxServerUrl, nameNodeInfo);
        System.out.println(res);
    }
}
MonitorApp.java:
import java.io.IOException;

public class MonitorApp {
    public static void main(String[] args) throws IOException {
        StatefulHttpClient client = new StatefulHttpClient(null);
        HadoopUtil.getHdfsSummary(client).printInfo();
    }
}
Running MonitorApp prints the collected HDFS summary metrics.
YARN metrics can be collected along the same lines, so the details are not shown here.
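For reference, the ResourceManager exposes the same /jmx endpoint on its web UI port (8088 by default), so MonitorMetrics and StatefulHttpClient can be reused unchanged; YARN also offers a REST API at /ws/v1/cluster/metrics with similar data. Below is a sketch of the analogous query; the bean and attribute names (ClusterMetrics, NumActiveNMs) are assumptions about a default setup and should be verified against http://localhost:8088/jmx first.

import java.io.IOException;

// Sketch of the analogous YARN collection, reusing the classes above.
// Bean/attribute names are assumptions; check the RM's /jmx output.
public class YarnMetricsDemo {
    public static final String yarnJmxServerUrl = "http://localhost:8088";
    public static final String clusterMetrics = "Hadoop:service=ResourceManager,name=ClusterMetrics";

    public static void main(String[] args) throws IOException {
        StatefulHttpClient client = new StatefulHttpClient(null);
        String url = String.format("%s/jmx?qry=%s", yarnJmxServerUrl, clusterMetrics);
        MonitorMetrics metrics = client.get(MonitorMetrics.class, url, null, null);
        System.out.println("active NodeManagers: " + metrics.getMetricsValue("NumActiveNMs"));
    }
}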