How to deploy Tomcat to realize load balancing configuration under centos 7

2025-02-23 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

This post shows how to deploy Tomcat behind Nginx to achieve a load-balancing configuration under CentOS 7, and should be useful in practical applications. Load balancing is a broad topic with plenty of theory available online; here we focus on a practical walkthrough based on accumulated industry experience.

Tomcat is a free, open-source, lightweight web application server. It is widely used in small and medium-sized systems with modest concurrency, and it is the first choice for developing and testing JSP programs. Although Tomcat can serve HTML pages just like web servers such as Apache or Nginx, its static-page performance is much lower than theirs, so Tomcat generally runs on the backend as a servlet and JSP container. A typical Tomcat deployment scenario looks like this:

Users always access the Apache/Nginx server first, which hands dynamic requests over to the Tomcat servers for processing. All servers connect to a shared storage server, so users see the same data on every visit. Apache/Nginx performs the scheduling, which is the well-known load balancing.

A single Tomcat site is a single point of failure and cannot cope with many complex and varied client requests, so it cannot be used alone in a production environment; load balancing is needed to solve these problems.

Nginx is an excellent HTTP server: it can handle up to 50,000 concurrent connections, processes static resources very efficiently, runs stably, and consumes very little memory, CPU, and other system resources. Many large websites now use Nginx as the reverse proxy and load balancer in front of their backend application servers to improve the concurrency capacity of the whole site.

Now for the preparatory work. To keep things simple, the shared storage server will not be deployed. The environment is as follows:

1. Preparation before deployment:

All three servers run CentOS 7, and the following software is used during deployment:

The CentOS 7 system image, plus the Nginx and Tomcat source packages, which can be downloaded from the official websites or from the link I provide (packaged as an ISO image file): https://pan.baidu.com/s/1hQOG-e9aaW8V2kvbBSzIxg

Extraction code: 9pdv

2. Configure Tomcat server:

This post focuses on reaching the final result. For a description of the Tomcat configuration file and its options, refer to this post: https://blog.51cto.com/14154700/2412234.

1. Start deploying Tomcat on the 192.168.1.1 server (firewall configuration is omitted here: either allow the relevant traffic through the firewall, or disable it entirely as I did. Tomcat listens on port 8080 by default; Nginx listens on port 80 by default):

[root@localhost ~]# java -version    # check whether JDK is installed; install it if missing
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
[root@localhost media]# tar zxf apache-tomcat-8.5.16.tar.gz -C /usr/src    # unpack the Tomcat package
[root@localhost media]# cd /usr/src/
[root@localhost src]# mv apache-tomcat-8.5.16/ /usr/local/tomcat8    # Tomcat needs no compiling; it is usable right after unpacking
[root@localhost src]# mkdir -p /web/webapp1    # create the directory for the Java web site files
[root@localhost src]# vim /web/webapp1/index.jsp    # create an index.jsp test page ("JSP test1 page")
[root@localhost src]# vim /usr/local/tomcat8/conf/server.xml    # edit the main Tomcat configuration file
# Locate the <Host> section and add a <Context> entry with these attributes:
#   docBase: the document base directory of the web application
#   path: the URL context path ("" selects the default application)
#   reloadable: whether to monitor class changes and reload automatically
[root@localhost ~]# /usr/local/tomcat8/bin/startup.sh    # start the service; to stop it, run shutdown.sh instead
Using CATALINA_BASE:   /usr/local/tomcat8
Using CATALINA_HOME:   /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.
[root@localhost src]# netstat -antp | grep 8080    # verify that the default port 8080 is listening
tcp6       0      0 :::8080        :::*        LISTEN      13220/java
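The two edits above can be sketched as follows. This is a minimal illustration, not the author's exact files: the scratch directory stands in for /web/webapp1 and /usr/local/tomcat8/conf, and the JSP markup is an assumption, since the page only needs to identify which server answered.

```shell
#!/bin/sh
# Sketch of the two files the steps above create, written to a scratch
# directory instead of the real deployment paths.
DOCROOT=./webapp1-demo    # stand-in for /web/webapp1
mkdir -p "$DOCROOT"

# index.jsp: the test page served by this Tomcat instance
cat > "$DOCROOT/index.jsp" <<'EOF'
<%@ page language="java" contentType="text/html; charset=UTF-8" %>
<html><body>
<h1>JSP test1 page</h1>
<p>Server time: <%= new java.util.Date() %></p>
</body></html>
EOF

# The <Context> element added inside <Host> in server.xml:
#   docBase    - document base directory of the web application
#   path       - URL context path ("" selects the default application)
#   reloadable - whether Tomcat monitors class changes and reloads
cat > "$DOCROOT/context-snippet.xml" <<'EOF'
<Context docBase="/web/webapp1" path="" reloadable="false" />
EOF

cat "$DOCROOT/context-snippet.xml"
```

On the real server the `<Context>` line goes inside the `<Host>` element of server.xml, and the JSP file goes under /web/webapp1.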

Visit http://192.168.1.1:8080 locally to test, and you will see the following test page:

At this point, Tomcat on 192.168.1.1 is configured. The configuration of the other Tomcat server, 192.168.1.2, is exactly the same, so simply repeat the steps above on 192.168.1.2. However, to make the load-balancing effect visible during testing (so we can see that each request is served by a different server), the test page on 192.168.1.2 must differ from the one on 192.168.1.1.

In a real production environment, however, both Tomcat servers must use the same shared storage server, so that users receive the same page no matter which server handles the request.

Go through the above configuration on the 192.168.1.2 server, changing the content of its test page as follows:

[root@localhost src]# vim /web/webapp1/index.jsp    # edit the test page so its content differs from server 1's page

3. Configure the Nginx server (IP: 192.168.1.1):

1. Install Nginx:

[root@localhost ~]# yum -y install pcre-devel zlib-devel openssl-devel    # install dependency packages
[root@localhost ~]# useradd www -s /bin/false    # create the run user
[root@localhost media]# tar zxf nginx-1.12.0.tar.gz -C /usr/src    # unpack
[root@localhost media]# cd /usr/src/nginx-1.12.0/    # switch to the source directory
[root@localhost nginx-1.12.0]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-file-aio --with-http_stub_status_module --with-http_gzip_static_module --with-http_flv_module && make && make install    # compile and install
[root@localhost nginx-1.12.0]# vim /usr/local/nginx/conf/nginx.conf    # edit the main configuration file
..............
# Locate the "gzip on;" line and add the following upstream block after it.
# The weight parameter sets the scheduling weight: the higher the weight, the
# greater the probability of being selected. To make the test effect obvious,
# both weights are set the same here.
upstream tomcat_server {
    server 192.168.1.1:8080 weight=1;
    server 192.168.1.2:8080 weight=1;
}
server {
    listen 80;
    server_name localhost;
    ..............
    location / {
        root html;
        index index.html index.htm;
        proxy_pass http://tomcat_server;    # the name after http:// must match the upstream name above, or scheduling will not work
    }
}
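The upstream block added above can also be generated by a small script, which is handy if the backend list grows. The `make_upstream` helper is my own illustration, not part of the original deployment:

```shell
#!/bin/sh
# Hypothetical helper: print an nginx upstream block for a list of
# "host:port" backends, all with equal weight (as in the config above).
make_upstream() {
    name=$1; shift
    echo "upstream $name {"
    for backend in "$@"; do
        echo "    server $backend weight=1;"
    done
    echo "}"
}

# Prints the same upstream block used in nginx.conf above.
make_upstream tomcat_server 192.168.1.1:8080 192.168.1.2:8080
```

The printed block can be pasted into nginx.conf; unequal weights would be added per backend instead of the fixed `weight=1`.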

2. Set up service management for Nginx:

[root@localhost nginx-1.12.0]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/    # create a link to the main program
[root@localhost ~]# vim /etc/init.d/nginx    # create the service script
#!/bin/bash
# chkconfig: - 99 20
PROG="/usr/local/nginx/sbin/nginx"
PIDF="/usr/local/nginx/logs/nginx.pid"
case "$1" in
  start)
    $PROG
    ;;
  stop)
    kill -s QUIT $(cat $PIDF)
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  reload)
    kill -s HUP $(cat $PIDF)
    ;;
  *)
    echo "USAGE: $0 {start|stop|restart|reload}"
    exit 1
esac
exit 0
[root@localhost ~]# chmod +x /etc/init.d/nginx    # add execute permission
[root@localhost ~]# chkconfig --add nginx    # register it as a system service
[root@localhost nginx-1.12.0]# nginx -t    # check the main configuration file for errors
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@localhost ~]# systemctl start nginx    # start the Nginx service to confirm the script works
[root@localhost ~]# netstat -anpt | grep nginx    # check that port 80 is listening
tcp        0      0 0.0.0.0:80     0.0.0.0:*   LISTEN
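The service script's control flow can be exercised safely with a dry-run version that echoes each action instead of signaling a real nginx process; the `nginx_ctl` function here is a hypothetical stand-in for the script above, not part of the deployment:

```shell
#!/bin/sh
# Dry-run sketch of the init script's case dispatch: the same branches,
# but echoing the would-be action instead of running nginx or sending signals.
nginx_ctl() {
    case "$1" in
        start)   echo 'start: run $PROG' ;;
        stop)    echo 'stop: kill -s QUIT $(cat $PIDF)' ;;
        restart) nginx_ctl stop && nginx_ctl start ;;
        reload)  echo 'reload: kill -s HUP $(cat $PIDF)' ;;
        *)       echo "USAGE: $0 {start|stop|restart|reload}"; return 1 ;;
    esac
}

# restart is simply stop followed by start, exactly as in the real script
nginx_ctl restart
```

Note how `restart` reuses the `stop` and `start` branches, mirroring the `$0 stop; $0 start` self-invocation in the real script.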

4. Access testing:

At this point the deployment is complete. Now use a client to access the Nginx server at 192.168.1.1 and test; the results are as follows:

On the first visit, you will see the following interface:

Refresh the page and you will see the following interface

As you can see, although we are accessing the Nginx server, it is the Tomcat servers that actually handle the requests, and successive requests are handled by different Tomcat servers; the load-balancing effect is obvious.
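The alternation observed above is Nginx's default round-robin scheduling; with equal weights it degenerates to simple alternation between the two backends, which can be sketched locally. The `pick_backend` helper is purely illustrative (Nginx's real algorithm is smooth weighted round-robin, but with equal weights the visible effect is the same):

```shell
#!/bin/sh
# Local sketch of equal-weight round-robin over the two Tomcat backends.
BACKENDS="192.168.1.1:8080 192.168.1.2:8080"

pick_backend() {
    # $1 is the request number; cycle through the backend list
    n=$(( $1 % 2 ))
    set -- $BACKENDS    # load backends as positional parameters
    shift "$n"
    echo "$1"
}

for req in 0 1 2 3; do
    echo "request $req -> $(pick_backend $req)"
done
# request 0 -> 192.168.1.1:8080
# request 1 -> 192.168.1.2:8080
# request 2 -> 192.168.1.1:8080
# request 3 -> 192.168.1.2:8080
```

This is exactly why refreshing the page shows the two test pages in turn: with equal weights each backend takes every second request.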

5. Closing notes: minor problems encountered during deployment:

After changing the Tomcat server's configuration file, the test page still showed Tomcat's default page, which was a little confusing. After stopping and starting the Tomcat service a few times with the following commands, it worked:

[root@localhost webapp1]# /usr/local/tomcat8/bin/shutdown.sh    # stop the service
Using CATALINA_BASE:   /usr/local/tomcat8
Using CATALINA_HOME:   /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
[root@localhost webapp1]# /usr/local/tomcat8/bin/startup.sh    # start the service
Using CATALINA_BASE:   /usr/local/tomcat8
Using CATALINA_HOME:   /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.

It was probably an issue with how this service stops and starts; I did not dig into it further. This service does behave quite differently from others:

After the Nginx service is stopped, its port can no longer be found, as follows:

[root@localhost ~]# systemctl stop nginx
[root@localhost ~]# netstat -anpt | grep nginx    # after Nginx stops, nothing is found
[root@localhost ~]#

After the Tomcat service is stopped, checking the port shows the connection in TIME_WAIT state (the LISTEN state seen at startup becomes TIME_WAIT); after waiting a while, no entry can be found at all:

[root@localhost webapp1]# netstat -antp | grep 8080
tcp6       0      0 ::1:8080       ::1:56448   TIME_WAIT   -

That covers deploying Tomcat in a load-balancing configuration under CentOS 7. If there is anything else you would like to know, you can look it up in the industry news or ask our professional technical engineers, who have more than ten years of experience in the industry.
