

Installation and configuration of the link-monitoring tool Pinpoint

2025-04-25 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Tags: APM

A preliminary understanding of the Pinpoint call-chain tool

In this article, we focus on the architecture, installation and deployment of pinpoint tools.

1. Introduction to pinpoint tools:

Pinpoint is an APM tool for large-scale distributed systems, written in Java; it is often called a call-chain system or a distributed tracing system. When the front end sends a request to the backend, the backend service may invoke multiple services, each of which may invoke others, before the results are aggregated and returned to the page. If an exception occurs somewhere in that chain, it is hard for engineers to locate exactly which service call caused the problem. Tools like Pinpoint track the complete call chain of each request and collect performance data for every service along it, so that engineers can locate problems quickly.

Pinpoint's impact on server performance is very small (roughly a 3% increase in resource utilization). Installing the agent is non-invasive: you only need to add three lines to the startup script of the Tomcat under test, and the probe then monitors the whole program. Similar tools include Google's Dapper, Twitter's Zipkin, Taobao's EagleEye, Dianping's CAT, the domestic open-source SkyWalking, and commercial APM tools such as Lingyun.

2. Pinpoint system architecture:

- Pinpoint-Collector: collects the various kinds of performance data
- Pinpoint-Agent: the probe attached to the application you run
- Pinpoint-Web: displays the collected data as web pages
- HBase Storage: stores the collected data in HBase

3. Installation and configuration of the HBase database:

Pinpoint uses HBase as its storage database. HBase is the Hadoop database from Apache, providing random, real-time read/write access to big data; it is an open-source implementation of Google's BigTable. HBase's goal is to store and process large data sets, more specifically, to handle very large tables (billions of rows by millions of columns) using only ordinary hardware. HBase is an open-source, distributed, multi-version, column-oriented storage model. It can use either the local file system directly or Hadoop's HDFS. To improve data reliability and system robustness, and to take full advantage of HBase's ability to handle large data sets, it is better to use HDFS as the file storage system.

HBase follows a simple master-slave server architecture, consisting of an HRegionServer cluster and an HBase Master server. The HBase Master is responsible for managing all HRegionServers, while the RegionServers are coordinated through ZooKeeper, which also handles errors that may occur while the HBase servers are running.

The HBase Master server itself does not store any HBase data; HBase logical tables are divided into multiple Regions and stored across the HRegionServer group. What the HBase Master stores is the mapping from data to HRegionServers.

HBase can be installed in three modes: stand-alone, pseudo-distributed, and fully distributed; only the fully distributed mode is described here. The prerequisite is that the Hadoop cluster and ZooKeeper are installed and running correctly.

# install zookeeper on the first node:
tar xzvf zookeeper-3.4.8.tar.gz -C /usr/local/
cd /usr/local/
ln -sv zookeeper-3.4.8 zookeeper
cd /usr/local/zookeeper
mkdir -p data3
mkdir -p logs3
cd /usr/local/zookeeper/conf
cp -r zoo_sample.cfg zoo.cfg
vim zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/usr/local/zookeeper/data3
dataLogDir=/usr/local/zookeeper/logs3
clientPort=2181
server.189=192.168.1.189:2888:3888
server.190=192.168.1.190:2888:3888
server.191=192.168.1.191:2888:3888
echo "189" >> /usr/local/zookeeper/data3/myid  # register this server's identity in the myid file; 189 is the last octet of this server's IP address
/usr/local/zookeeper/bin/zkServer.sh start   # start the service
/usr/local/zookeeper/bin/zkServer.sh stop    # stop the service
/usr/local/zookeeper/bin/zkServer.sh status  # view the role: leader is the master role, follower is the slave role

Parameter notes:
- tickTime: the basic time unit used in zookeeper, in milliseconds.
- dataDir: the zk data directory; can be any directory.
- dataLogDir: the log directory; can also be any directory. If this parameter is not set, the dataDir setting is used.
- clientPort: the port that listens for client connections; the default is 2181.
- initLimit: a zookeeper cluster contains multiple servers, one of which is the leader while the rest are followers. initLimit configures the maximum heartbeat time between a follower and the leader when the connection is initialized. Here it is set to 5, meaning the limit is 5 times tickTime, i.e. 5*2000 = 10000 ms = 10 s.
- syncLimit: configures the maximum length of time for sending messages, requests and replies between leader and follower. Here it is set to 2, meaning the limit is 2 times tickTime, i.e. 4000 ms.
- server.X=A:B:C: X is a number identifying the server; A is the IP address where the server runs; B is the port the server uses to exchange messages with the cluster leader; C is the port used when electing a leader. If you configure pseudo-cluster mode, the B and C ports of each instance must differ. (What is a pseudo-cluster? A cluster made by running three instances on one server.) The other two servers install zk the same way as the first, with the same configuration file; the only difference is that the content of each myid file must match the last octet of that server's own IP address.
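The myid convention above (myid = last octet of the host's IP) can be sketched as a tiny script. This is only an illustration; the IP below is the example address from this article, so substitute your own.

```shell
# Derive the zookeeper myid from the last octet of the host's IP,
# matching the convention used in the zoo.cfg above.
ip="192.168.1.189"     # stand-in for this host's IP address
myid="${ip##*.}"       # strip everything through the last dot, leaving the last octet
echo "$myid"           # prints 189
# in the real setup this value is written to /usr/local/zookeeper/data3/myid
```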

After installing the zk cluster, you need the HDFS file system, because the HBase database depends on HDFS (HBase can also use the local file system, but HDFS makes better use of the system's robustness and performance). Since I have only just started working with HBase and am not very familiar with big-data middleware, my HDFS file system is stand-alone. I then installed the HBase database in a cluster layout, divided into an HMaster and HRegionServers.

# all three servers hosting the hbase database must allow root login over ssh on the default port 22
chattr -i /etc/ssh/sshd_config
sed -i 's#PermitRootLogin no#PermitRootLogin yes#g' /etc/ssh/sshd_config
sed -i 's#AllowUsers ttadm#AllowUsers ttadm root#g' /etc/ssh/sshd_config
# generate a key pair on the master and copy the public key to the other two machines
ssh-keygen -t rsa
ssh-copy-id 192.168.1.190
ssh-copy-id 192.168.1.191
cd /usr/local/hbase-1.4.10/conf/
vim hbase-env.sh
# if true, HBase manages its own bundled zk; since we installed a separate zk cluster, set this parameter to false
export HBASE_MANAGES_ZK=false
# The java implementation to use. Java 1.7+ required.
export JAVA_HOME=/usr/local/jdk1.8.0_131
# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/usr/local/hbase-1.4.10/conf

vim hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.1.189:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.1.189,192.168.1.190,192.168.1.191</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper/data3</value>
  </property>
</configuration>
# this configuration file mainly sets the HBase storage path and the zk connection details.
# HBase storage can use either local storage or the HDFS file system; for local storage the format is:
# hbase.rootdir = file:/usr/src/pinpoint_resource/hbase-1.2.4/data

vim regionservers  # configure the regionserver addresses
192.168.1.189
192.168.1.190
192.168.1.191

cd /usr/local/hbase-1.4.10/bin
./start-hbase.sh  # start the hbase database. Before starting, copy the hbase installation and configuration files to the other two machines; with passwordless login configured, start-hbase.sh automatically starts HRegionServer on them.

The way to check is the jps command:
# on the two slave nodes, view the hbase processes
[root@SZ1PRDOAM00AP010 ~]# jps
17408 HRegionServer   # hbase RegionServer
16931 QuorumPeerMain  # zk process
18475 Bootstrap
24047 Jps
# on the master node
[root@SZ1PRDOAM00AP009 conf]# jps
21968 SecondaryNameNode  # hdfs file system process
21793 DataNode           # hdfs process that stores data
98883 Jps
73397 QuorumPeerMain     # zk process
81286 Bootstrap
74201 HRegionServer      # hbase process
21659 NameNode           # hdfs process that manages metadata
74061 HMaster            # hbase master process

# initialize the pinpoint database
wget https://github.com/naver/pinpoint/blob/1.8.5/hbase/scripts/hbase-create.hbase
hbase shell hbase-create.hbase
# if you need to clear the data, download the hbase-drop.hbase script
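The jps-based health check above can be sketched as a small script that confirms the expected daemons are present. The jps_output string below is a simulated sample from this article; in practice you would use jps_output="$(jps)" on each node.

```shell
# Verify that the expected hbase/zk daemons appear in the jps output.
jps_output="17408 HRegionServer
16931 QuorumPeerMain
24047 Jps"
missing=""
for proc in HRegionServer QuorumPeerMain; do
  # record any daemon name that does not appear in the output
  echo "$jps_output" | grep -q "$proc" || missing="$missing $proc"
done
if [ -z "$missing" ]; then
  echo "node healthy"
else
  echo "missing:$missing"
fi
```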

After the HBase database is installed successfully, there is a web administration page where you can view the database tables. Visit http://192.168.1.189:16010/master-status (port 16010) and you can see the tables we just initialized.

Because the HBase database depends on the HDFS file system, we install HDFS as well. To set up the HDFS file system, first install Hadoop.

Hadoop Common is what remained after HDFS and MapReduce were separated into independent subprojects (from Hadoop 0.20 onward). It is the core part of Hadoop, providing common tool sets for the other modules, such as the serialization mechanism, the Hadoop abstract file system FileSystem, the configuration tool Configuration, and APIs for software development on the platform. The other Hadoop subprojects are built on this foundation.

HDFS is a distributed file storage system. Like FAT32 or NTFS, it is a file system format, and it is the foundation of data storage management in the Hadoop architecture. It is a highly fault-tolerant system that can detect and handle hardware failures, and it runs on low-cost commodity hardware.

HBase is the Hadoop database. It is suited to unstructured data storage, and it is column-based rather than row-based. HBase is a scalable, highly reliable, high-performance, distributed, column-oriented dynamic-schema database built on HDFS and oriented to structured data. HBase data is generally stored on HDFS; Hadoop HDFS provides highly reliable underlying storage support for it.

cd /usr/local
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.9.0/hadoop-2.9.0.tar.gz
tar xzvf hadoop-2.9.0.tar.gz
cd /usr/local/hadoop-2.9.0/etc/hadoop
vim hadoop-env.sh
# set JAVA_HOME in this file so that it is defined correctly
export JAVA_HOME=/usr/local/jdk1.8.0_131

# view the hadoop version
cd /usr/local/hadoop-2.9.0/bin
[root@SZ1PRDOAM00AP009 bin]# ./hadoop version
Hadoop 2.9.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 756ebc8394e473ac25feac05fa493f6d612e6c50
Compiled by arsuresh on 2017-11-13T23:15Z
Compiled with protoc 2.5.0
From source with checksum 0a76a9a32a5257331741f8d5932f183
This command was run using /usr/local/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar

# configure the hadoop environment variables
[root@SZ1PRDOAM00AP009 bin]# cat /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"

After installing Hadoop, we then bring up the HDFS file system. HDFS ships as part of the Hadoop package; you only need to modify a few configuration files.

vim /usr/local/hadoop-2.9.0/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.189:9000</value>
    <!-- the NameNode address that receives requests; clients request data at this address -->
  </property>
</configuration>

vim /usr/local/hadoop-2.9.0/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///usr/local/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>file:///usr/local/hadoop/hdfs/datanode</value>
  </property>
</configuration>
# because this is a single node, 1 replica is configured; the storage directories are local paths

# ssh passwordless login
ssh localhost
# if this is not supported, execute the following three commands in order
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

# format hdfs before using it for the first time
hdfs namenode -format
# start hdfs after formatting completes
sbin/start-dfs.sh
# once startup is complete, check the hdfs processes
ps -ef | grep hdfs
# you should see NameNode, DataNode and SecondaryNameNode; otherwise check the error messages in the logs under logs/
# after installing the HDFS file system, you can view its status through the web management page:
# http://192.168.1.189:50070/dfshealth.html#tab-overview

4. Installation and configuration of pinpoint:
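The fs.default.name value configured above is reused later (for example in hbase.rootdir), so it is worth being clear about its parts. A minimal sketch splitting it into NameNode host and port:

```shell
# Split the fs.default.name value into host and port.
fs_default="hdfs://192.168.1.189:9000"   # the value from core-site.xml above
hostport="${fs_default#hdfs://}"         # drop the hdfs:// scheme
host="${hostport%%:*}"                   # part before the colon: the NameNode host
port="${hostport##*:}"                   # part after the colon: the NameNode RPC port
echo "$host $port"                       # prints: 192.168.1.189 9000
```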

Pinpoint generally consists of three components: Pinpoint-Collector collects the data, Pinpoint-Web displays it, and Pinpoint-Agent instruments the client application; HBase is used to store the data. Let's first look at the installation of Pinpoint-Collector.
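Before installing, it helps to know which collector ports every agent host must reach. This sketch assumes the Pinpoint 1.8.x default listener ports, which match the networktest.sh output shown later in this article; the collector IP is the one used throughout this setup.

```shell
# Enumerate the collector channels an agent must be able to reach
# (assumption: Pinpoint 1.8.x defaults: TCP 9994, UDP-STAT 9995, UDP-SPAN 9996).
collector_ip="192.168.1.190"
for spec in "TCP:9994" "UDP-STAT:9995" "UDP-SPAN:9996"; do
  channel="${spec%%:*}"   # channel name before the colon
  port="${spec##*:}"      # port number after the colon
  echo "$channel => $collector_ip:$port"
done
```

If any of these ports is blocked by a firewall, the agent will fail to report data, which is exactly the symptom the network test script diagnoses.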

wget https://github.com/naver/pinpoint/releases/download/1.8.5/pinpoint-agent-1.8.5.tar.gz
wget https://github.com/naver/pinpoint/releases/download/1.8.5/pinpoint-collector-1.8.5.war
wget https://github.com/naver/pinpoint/releases/download/1.8.5/pinpoint-web-1.8.5.war
# pinpoint-collector and pinpoint-web are war packages, so they run in tomcat.
# In a production environment, it is recommended to install the collector and the web interface on different machines.

# collector
cd /usr/local/tomcat/webapps/
rm -rf *
unzip pinpoint-collector-1.8.5.war -d ROOT
cd /usr/local/tomcat/webapps/ROOT/WEB-INF/classes
vim pinpoint-collector.properties
cluster.zookeeper.address=192.168.1.191  # modify the zookeeper address
vim hbase.properties
hbase.client.host=192.168.1.191
hbase.client.port=2181  # configure the database storage address
/usr/local/tomcat/bin/startup.sh  # start tomcat

# web
cd /usr/local/tomcat/webapps/
rm -rf *
unzip pinpoint-web-1.8.5.war -d ROOT
cd /usr/local/tomcat/webapps/ROOT/WEB-INF/classes
vim hbase.properties
hbase.client.host=192.168.1.191
hbase.client.port=2181  # configure the database storage address
vim pinpoint-web.properties
cluster.enable=false  # disable the web clustering feature
cluster.web.tcp.port=9997
cluster.zookeeper.address=192.168.1.191  # configure the zk address
/usr/local/tomcat/bin/startup.sh  # start tomcat

# agent
mkdir -p /usr/local/pinpoint-agent
cd /usr/local
tar xzvf pinpoint-agent-1.8.5.tar.gz -C pinpoint-agent
cd pinpoint-agent
vim pinpoint.config
profiler.collector.ip=192.168.1.190  # configure the collector server address

# there is a network test script that checks whether the network between agent and collector is working
cd script
[root@SZ1PRDOAM00AP009 script]# sh networktest.sh
CLASSPATH=./tools/pinpoint-tools-1.8.5.jar:
2019-10-15 16:13:17 [INFO](com.navercorp.pinpoint.bootstrap.config.DefaultProfilerConfig) configuration loaded successfully.
UDP-STAT:// SZ1PRDOAM00AP010.bf.cn => 192.168.1.190:9995 [SUCCESS]
UDP-SPAN:// SZ1PRDOAM00AP010.bf.cn => 192.168.1.190:9996 [SUCCESS]
TCP:// SZ1PRDOAM00AP010.bf.cn => 192.168.1.190:9994 [SUCCESS]
# I hit a problem here: port 9995 seemed blocked. The fix, after troubleshooting, was to add the IPs and hostnames of the three hosts to the /etc/hosts file.

vim /usr/local/tomcat/bin/catalina.sh
JAVA_OPTS="$JAVA_OPTS -javaagent:/usr/local/pinpoint-agent/pinpoint-bootstrap-1.8.5.jar"
JAVA_OPTS="$JAVA_OPTS -Dpinpoint.agentId=gytest"
JAVA_OPTS="$JAVA_OPTS -Dpinpoint.applicationName=gytest01"
# to attach the agent, you only need to modify the catalina.sh startup script to add the pinpoint jar path and the application's identifiers.
# -Dpinpoint.agentId uniquely identifies the application instance the agent runs in (e.g., loan-33)
# -Dpinpoint.applicationName groups many instances of the same application into a single service (e.g., loan)
# Note: pinpoint.agentId must be globally unique to identify the application instance; all applications sharing the same pinpoint.applicationName are treated as multiple instances of a single service.
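The three catalina.sh additions compose into the final flag string the Tomcat JVM starts with. A minimal sketch of that composition (agentId/applicationName are the example values from this article):

```shell
# Compose JAVA_OPTS exactly as the catalina.sh edits do.
JAVA_OPTS=""
JAVA_OPTS="$JAVA_OPTS -javaagent:/usr/local/pinpoint-agent/pinpoint-bootstrap-1.8.5.jar"
JAVA_OPTS="$JAVA_OPTS -Dpinpoint.agentId=gytest"
JAVA_OPTS="$JAVA_OPTS -Dpinpoint.applicationName=gytest01"
echo "$JAVA_OPTS"   # the full flag string Tomcat will pass to the JVM
```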

I have not written a technical blog for nearly three months. My wife recently gave birth to our second child, there is a lot going on at home, and work at the company has been busy at the end of the year, so I have not found the time; I hope you can understand. I recently moved to a new project at the company, a unified monitoring project covering APM link tracing, Zabbix monitoring, business monitoring, and so on, and I will share some of that experience when I have time. Thank you for your continued attention. My WeChat official account is "Cloud Era IT Operation and Maintenance"; you can scan the code to follow.



