

Deploy Tomcat and its load cluster

2025-04-01 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Tomcat is a free, open-source, lightweight web application server. It is widely used in small and medium-sized systems and in scenarios without many concurrent users, and it is the first choice for developing and debugging JSP programs. Like Apache or Nginx, Tomcat can serve HTML pages, but its ability to handle static HTML is far below that of Apache or Nginx, so Tomcat usually runs separately on the back end as a servlet and JSP container. The following figure shows a typical Tomcat application scenario:

Users always access the Apache/Nginx server, which then hands requests over to the Tomcat servers for processing. All servers connect to a shared storage server so that users see the same data on every visit. Apache/Nginx performs the scheduling, that is, the well-known load balancing, which will not be explained in depth here.

In general, a single Tomcat site is a single point of failure and cannot cope with many complex, diverse client requests, so it cannot be used alone in a production environment. Load balancing is needed to solve these problems.

Nginx is an excellent HTTP server: it can support up to 50,000 concurrent connections, has strong static-resource processing capacity, runs stably, and consumes very little memory, CPU, and other system resources. Many large websites currently use Nginx as the reverse proxy and load balancer in front of their back-end application servers to improve the concurrency the whole site can handle.

Now prepare the environment shown below. To keep things simple, the shared storage server will not be deployed. The environment is as follows:

1. Preparation before deployment:

All three servers run CentOS 7. The software used during deployment is as follows:

CentOS 7 system image

The Nginx and Tomcat source packages can be downloaded from the official websites or from the link I provide (packaged as an ISO image file): https://pan.baidu.com/s/1LgiBUuU5a1SQNh5qeqBODw

Extraction code: h9px

2. Configure Tomcat server:

1. Start deploying Tomcat on the 192.168.1.1 server (firewall configuration is omitted here; configure the firewall to allow the relevant traffic, although I simply disabled the firewall. By default Tomcat uses port 8080 and Nginx uses port 80):
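If you prefer to keep the firewall running rather than disabling it as above, a hypothetical firewalld configuration could look like the following sketch (assumes CentOS 7's default firewalld; the port numbers are the Tomcat and Nginx defaults just mentioned):

```
# Open Tomcat's default port (8080) and Nginx's default port (80),
# then reload the firewall rules. Run as root on each server.
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
```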

[root@localhost ~]# java -version    # check whether JDK is installed; if not, install it
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
[root@localhost media]# tar zxf apache-tomcat-8.5.16.tar.gz -C /usr/src    # unpack the Tomcat package
[root@localhost media]# cd /usr/src/
[root@localhost src]# mv apache-tomcat-8.5.16/ /usr/local/tomcat8    # Tomcat needs no compilation; it is ready to use after unpacking
[root@localhost src]# mkdir -p /web/webapp1    # create the Java web site root, used to store the site files
[root@localhost src]# vim /web/webapp1/index.jsp    # create an index.jsp test page ("JSP test1 page")
[root@localhost src]# vim /usr/local/tomcat8/conf/server.xml    # edit Tomcat's main configuration file
# Locate the <Host> section and add the two lines of a <Context> entry there:
#   docBase    — the document base directory of the web application;
#   path       — the URL context path ("" sets the default application);
#   reloadable — whether to monitor class changes and reload automatically.
[root@localhost ~]# /usr/local/tomcat8/bin/startup.sh    # start the service; to stop it, just change startup.sh to shutdown.sh
Using CATALINA_BASE:   /usr/local/tomcat8
Using CATALINA_HOME:   /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.
[root@localhost src]# netstat -antp | grep 8080    # check that the default port 8080 is listening
tcp6       0      0 :::8080          :::*          LISTEN      13220/java
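The two lines added to server.xml were lost from the transcript above. As a sketch of what such an entry typically looks like in this kind of setup (the docBase path matches the /web/webapp1 directory created above; the empty path and reloadable="false" values are assumptions):

```xml
<!-- Inside the <Host name="localhost" ...> element of
     /usr/local/tomcat8/conf/server.xml -->
<Context docBase="/web/webapp1" path="" reloadable="false">
</Context>
```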

Test locally by visiting 192.168.1.1:8080, and you will see the following test page:

At this point, Tomcat on 192.168.1.1 is configured. The configuration of the second Tomcat server, 192.168.1.2, is exactly the same, so you can repeat the steps above on 192.168.1.2. However, to see the load-balancing effect during testing, that is, to see that a different server answers each visit, the test page on the 192.168.1.2 Tomcat server must differ from the one on 192.168.1.1.

In an actual production environment, however, the two Tomcat servers must use the same shared storage server, so that users receive the same page no matter which server handles the request.
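For illustration only: if the shared storage were an NFS export (say from a hypothetical storage server at 192.168.1.100, which does not exist in this simplified environment), each Tomcat host could mount it over the site directory with an entry like this:

```
# Hypothetical /etc/fstab entry; 192.168.1.100:/share is an assumed NFS export,
# not a host that is part of this tutorial's environment.
192.168.1.100:/share  /web/webapp1  nfs  defaults,_netdev  0 0
```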

Repeat the configuration above on the 192.168.1.2 server and change the content of its test page, as follows:

[root@localhost src]# vim /web/webapp1/index.jsp    # change the page content so it differs from server 1
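The page content itself was lost from the transcript. A minimal sketch of such a test page might look like the following (the title text is an assumption; the only requirement is that it differ between the two servers):

```jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" %>
<html>
  <head><title>JSP test2 page</title></head>
  <body>
    <h1>JSP test2 page (served by 192.168.1.2)</h1>
  </body>
</html>
```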

3. Configure Nginx server (IP:192.168.1.1):

1. Install Nginx:

[root@localhost ~]# yum -y install pcre-devel zlib-devel openssl-devel    # install dependency packages
[root@localhost ~]# useradd www -s /bin/false    # create the run-as user
[root@localhost media]# tar zxf nginx-1.12.0.tar.gz -C /usr/src    # unpack
[root@localhost media]# cd /usr/src/nginx-1.12.0/    # switch to the source directory
[root@localhost nginx-1.12.0]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-file-aio --with-http_stub_status_module --with-http_gzip_static_module --with-http_flv_module && make && make install    # compile and install
[root@localhost nginx-1.12.0]# vim /usr/local/nginx/conf/nginx.conf    # edit the main configuration file
..................
    # gzip on;    <-- locate this line and add the following lines below it:
    upstream tomcat_server {
        server 192.168.1.1:8080 weight=1;
        server 192.168.1.2:8080 weight=1;
    }
    # The weight parameter is the weight: the higher the weight, the greater the
    # probability of being selected. To make the test effect obvious, both weights
    # are set the same here.
    server {
        listen       80;
        server_name  localhost;
..................
        location / {
            root   html;
            index  index.html index.htm;
            proxy_pass http://tomcat_server;
            # The name after "http://" must match the name of the upstream block
            # added above, otherwise scheduling will not take effect.
        }
    }

2. Optimize control of the Nginx service:

[root@localhost nginx-1.12.0]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/    # create a link file for the main program
[root@localhost ~]# vim /etc/init.d/nginx    # edit the service script
#!/bin/bash
# chkconfig: - 99 20
PROG="/usr/local/nginx/sbin/nginx"
PIDF="/usr/local/nginx/logs/nginx.pid"
case "$1" in
start)
    $PROG
    ;;
stop)
    kill -s QUIT $(cat $PIDF)
    ;;
restart)
    $0 stop
    $0 start
    ;;
reload)
    kill -s HUP $(cat $PIDF)
    ;;
*)
    echo "USAGE: $0 {start|stop|restart|reload}"
    exit 1
esac
exit 0
[root@localhost ~]# chmod +x /etc/init.d/nginx    # add execute permission
[root@localhost ~]# chkconfig --add nginx    # register as a system service
[root@localhost nginx-1.12.0]# nginx -t    # check the main configuration file for errors
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@localhost ~]# systemctl start nginx    # start the Nginx service to confirm the script works
[root@localhost ~]# netstat -anpt | grep nginx    # check whether port 80 is listening
tcp        0      0 0.0.0.0:80        0.0.0.0:*        LISTEN      nginx: master process

4. Access testing:

At this point, the deployment is complete. Now use a client to access the Nginx server 192.168.1.1 to test; the results are as follows:

On the first visit, you will see the following interface:

Refresh the page and you will see the following interface:

As you can see, although we access the Nginx server, it is a Tomcat server that actually handles the request, and successive requests are handled by different Tomcat servers. This shows that the load-balancing cluster has been built successfully and can switch between the two Tomcat server sites.
