
Build a redis cache server with pictures and texts in super detail (nginx+tomcat+redis+mysql implements session session sharing)


Blogger QQ:819594300

Blog address: http://zpf666.blog.51cto.com/

If you have any questions, feel free to contact the blogger, who will be happy to help. Thank you for your support!

1. Introduction to Redis

Redis is a key-value storage system. It is similar to Memcached but supports more value types, including string, list (linked list), set, zset (sorted set), and hash. Like Memcached, data is cached in memory for efficiency. The difference is that Redis periodically writes updated data to disk, or appends modification operations to a log file, and on that basis implements master-slave synchronization.

Redis is a high-performance key-value database. Its appearance largely makes up for the shortcomings of key/value stores such as Memcached, and in some cases it complements relational databases very well. It provides clients for Java, C/C++, C#, PHP, JavaScript, Perl, Objective-C, Python, Ruby and more, so it is easy to use.

If you simply compare Redis with Memcached, there are basically three differences:

1. Redis not only supports simple key/value data, but also provides storage for data structures such as list, set, zset, and hash.

2. Redis supports data backup, that is, master-slave data replication.

3. Redis supports data persistence: it can keep in-memory data on disk and reload it after a restart.

In Redis, not all data is kept in memory all the time. This is the biggest difference from Memcached. Redis only caches all key information; when it finds that memory usage exceeds a certain threshold, it triggers a swap operation and calculates, based on "swappability = age * log(size_in_memory)", which keys' values should be swapped to disk. The values of those keys are then persisted to disk and cleared from memory. This feature allows Redis to hold more data than the machine's memory can fit. Of course, the machine's memory must be able to hold all the keys, because keys are never swapped out.

When reading data, if the value of the requested key is not in memory, Redis loads it from the swap file and then returns it to the requester.

Comparison between Memcached and Redis

1. Network IO model

Memcached uses a multithreaded, non-blocking IO multiplexing network model, divided into a listening main thread and worker sub-threads. The listening thread accepts network connections and, after receiving a request, passes the connection descriptor through a pipe to a worker thread for read/write IO. The network layer uses the event library encapsulated by libevent, and the multithreaded model can take advantage of multiple cores.

Redis uses a single-threaded IO multiplexing model and encapsulates a simple AeEvent event handling framework, which mainly implements epoll, kqueue and select. For simple IO operations, a single thread can maximize the speed advantage, but Redis also provides some simple computing functions, such as sorting and aggregation; for these operations the single-threaded model seriously affects overall throughput, because the entire IO dispatch is blocked while the CPU is computing.

2. Memory management

Memcached uses a pre-allocated memory pool, managing memory with slabs and chunks of different sizes; a value is stored in an appropriately sized chunk according to its size. Redis requests memory on demand to store data.

3. Storage mode and other aspects

Memcached basically only supports simple key-value storage and does not support features such as persistence and replication. Redis supports many data structures besides key/value, such as list, set, sorted set, and hash.

2. How to maintain a session

At present, in order for web applications to handle large-scale access, they must be deployed as a cluster. The most effective cluster solution is load balancing, and with load balancing each user request may be assigned to a different, non-fixed server. We therefore first have to unify the session, so that users are served normally no matter which server their requests are forwarded to; in other words, we need a session sharing mechanism.

There are several solutions for unifying sessions in a cluster:

1. Precise request routing: session sticky, for example a hash policy based on the client IP, so that all of a user's requests go to one server and that single server stores the user's session login information. If that server goes down, it is equivalent to a single point of deployment: the session is lost because it was never replicated.

2. Session replication sharing: session replication, for example Tomcat's built-in session sharing, which synchronizes sessions among multiple application servers in the cluster so that the session is consistent and transparent. If one of the servers fails, the load balancer finds an available node and distributes the request to it; because the session has already been synchronized, the user's session information is not lost.

Shortcomings of this scheme:

It must be done between the same kind of middleware (e.g. Tomcat to Tomcat).

The performance cost of session replication rises rapidly, especially when large objects are stored in the session and change frequently; the performance degradation is then significant and consumes system resources. This limits the horizontal scalability of the web application.

Session content is synchronized to members by broadcast, which can cause network traffic bottlenecks, even on the intranet, so performance is poor under high concurrency.

3. Session sharing based on a cache DB

Session sharing based on a memcache/redis cache:

The session information is stored in the cache DB; when an application server accepts a new request, it saves the session information in the cache DB. When an application server fails, the load balancer finds an available node and distributes the request to it; when that application server finds that the session is not in its local memory, it looks it up in the cache DB, and if found, copies it locally. This achieves session sharing and high availability.

3. Nginx + Tomcat + Redis: load balancing and session sharing

1. Experimental environment (all hosts run CentOS 7.2)

Host                 IP address
Nginx (4-core CPU)   192.168.1.9
Tomcat-1             192.168.1.11
Tomcat-2             192.168.1.12
MySQL                192.168.1.10
Redis                192.168.1.13

2. Virtual machine environment diagram:

3. Experimental topology

Note: in this topology, nginx acts as a reverse proxy and separates static from dynamic content, distributing clients' dynamic requests to the two Tomcat servers according to their weights; Redis serves as the shared session data server for the two Tomcats, and MySQL is their back-end database.

4. Nginx installation and configuration

Note: Nginx acts as the load balancer for Tomcat, and Tomcat's session data is stored in Redis, which makes zero-downtime, 24x7 operation possible. Because sessions are stored in Redis, Nginx does not have to be configured to stick to a particular Tomcat, so true load balancing across multiple back-end Tomcats is achieved.

Now let's officially start installing nginx:

① Install zlib-devel, pcre-devel and other dependency packages.
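A typical dependency installation on CentOS 7.2 looks roughly like the following; the exact package list is an assumption based on the configure options used later (PCRE, zlib and OpenSSL development headers plus a compiler toolchain):

yum -y install pcre-devel zlib-devel openssl-devel gcc gcc-c++ make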

Note: the proxy and upstream modules together implement back-end web load balancing.

Nginx's built-in ngx_http_proxy_module and ngx_http_upstream_module modules together also provide health checking of the back-end servers.

proxy: implements the reverse proxy

upstream: implements load balancing

② Create the nginx program user.
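The user and group names come from the configure options below (--user=www --group=www); a minimal sketch of creating a non-login program user:

useradd -M -s /sbin/nologin www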

③ Compile and install nginx.

The content in the figure is as follows:

./configure --prefix=/usr/local/nginx1.10 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --with-pcre --with-http_flv_module && make && make install

④ Optimize the execution path of the nginx program.

⑤ Check the nginx configuration syntax.
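Steps ④ and ⑤ usually amount to linking the nginx binary into the PATH and running the built-in syntax check; a sketch assuming the install prefix /usr/local/nginx1.10 used above:

ln -s /usr/local/nginx1.10/sbin/nginx /usr/local/sbin/
nginx -t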

⑥ Write the nginx service script.

The script reads as follows:

#!/bin/bash
# nginx Startup script for the Nginx HTTP Server
# chkconfig: - 85 15
# pidfile: /usr/local/nginx1.10/logs/nginx.pid
# config: /usr/local/nginx1.10/conf/nginx.conf

nginxd=/usr/local/nginx1.10/sbin/nginx
nginx_config=/usr/local/nginx1.10/conf/nginx.conf
nginx_pid=/usr/local/nginx1.10/logs/nginx.pid
RETVAL=0
prog="nginx"

# Source function library.
. /etc/rc.d/init.d/functions

# Start nginx daemons functions.
start() {
    if [ -f $nginx_pid ]; then
        echo "nginx already running...."
        exit 1
    fi
    echo -n "Starting $prog: "
    $nginxd -c ${nginx_config}
    RETVAL=$?
    [ $RETVAL = 0 ] && touch /var/lock/subsys/nginx
}

# Stop nginx daemons functions.
stop() {
    echo -n "Stopping $prog: "
    $nginxd -s stop
    RETVAL=$?
    [ $RETVAL = 0 ] && rm -f /var/lock/subsys/nginx
}

# Reload nginx service functions.
reload() {
    echo -n "Reloading $prog: "
    $nginxd -s reload
}

# Status of nginx service functions.
status() {
    if [ -f $nginx_pid ]; then
        echo "$prog is running"
    else
        echo "$prog is stopped"
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: $prog {start|stop|restart|reload|status}"
        exit 1
esac

⑧ Start the nginx service.

As you can see from the figure above, nginx failed to start through the service script at first. The solution is as follows:
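The original fix is only shown in a screenshot. On CentOS, a typical way to register and start the init script written above is sketched below (assuming the script was saved as /etc/init.d/nginx):

chmod +x /etc/init.d/nginx      # make the script executable
chkconfig --add nginx           # register it with SysV service management
chkconfig nginx on              # start at boot
service nginx start             # start the service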

⑨ Configure the nginx reverse proxy (reverse proxy + load balancing + health checks).

Modify the nginx main configuration file:

The configuration file is as follows:

user www www;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
worker_rlimit_nofile 10240;
pid logs/nginx.pid;

events {
    use epoll;
    worker_connections 4096;
}

http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;

    # Compression Settings
    gzip on;
    gzip_comp_level 6;
    gzip_http_version 1.1;
    gzip_proxied any;
    gzip_min_length 2k;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;
    # end gzip

    # http_proxy Settings
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 75;
    proxy_send_timeout 75;
    proxy_read_timeout 75;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;

    # load balance Settings
    upstream backend_tomcat {
        server 192.168.1.11:8080 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.1.12:8080 weight=1 max_fails=2 fail_timeout=10s;
    }

    # virtual host Settings
    server {
        listen 80;
        server_name www.benet.com;
        charset utf-8;

        location / {
            root html;
            index index.jsp index.html index.htm;
        }

        location ~* \.(jsp|do)$ {
            proxy_pass http://backend_tomcat;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        }

        location /nginx_status {
            stub_status on;
            access_log off;
            allow 192.168.1.0/24;
            deny all;
        }
    }
}

Restart to make it effective:
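A sketch of checking the syntax and restarting through the service script written earlier:

nginx -t && service nginx restart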

⑩ Configure firewall rules.
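CentOS 7.2 ships with firewalld; a sketch of opening the HTTP port on the nginx host (assuming firewalld is the active firewall):

firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload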

5. Install and deploy tomcat application server

1) Install JDK on the tomcat-1 and tomcat-2 nodes

Description: before installing Tomcat, you must install JDK. JDK (Java Development Kit) is a free Java software development kit provided by Sun that contains the Java Virtual Machine (JVM). Java source programs are compiled into Java bytecode; as long as JDK is installed, the JVM can interpret these bytecode files, which guarantees Java's cross-platform nature.

① Install JDK and configure the Java environment:

Extract jdk-7u65-linux-x64.gz:

Move the extracted jdk1.7.0_65 directory to /usr/local/ and rename it java:
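A sketch of those two steps as shell commands, assuming the tarball sits in the current directory:

tar zxf jdk-7u65-linux-x64.gz
mv jdk1.7.0_65 /usr/local/java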

② Add the following to the /etc/profile file.

The content in the figure is as follows:

export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH

③ Run source on /etc/profile to make it take effect.

④ Check whether the environment variables are in effect.

⑤ Run the java -version command on tomcat-1 to check that the Java version matches what was just installed.
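A sketch of steps ③ ~ ⑤:

source /etc/profile        # reload the environment variables
echo $JAVA_HOME $PATH      # confirm the variables are set
java -version              # should report the 1.7.0_65 JDK installed above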

⑥ Install JDK on tomcat-2 in the same way, repeating steps ① ~ ⑤.

The screenshot is omitted here, and the steps are exactly the same.

At this point, the java environment is configured.

2) Install and configure Tomcat on the tomcat-1 and tomcat-2 nodes

① Unpack the apache-tomcat-7.0.54.tar.gz package.

② Move the unpacked folder to /usr/local/ and rename it tomcat7.
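As shell commands, assuming the tarball is in the current directory:

tar zxf apache-tomcat-7.0.54.tar.gz
mv apache-tomcat-7.0.54 /usr/local/tomcat7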

③ Configure the Tomcat environment variables.

The content in the figure is as follows:

export JAVA_HOME=/usr/local/java
export CATALINA_HOME=/usr/local/tomcat7
export PATH=$JAVA_HOME/bin:$CATALINA_HOME/bin:$PATH

④ Run source on /etc/profile to make it take effect.

⑤ Check whether the environment variables are in effect.

⑥ View the Tomcat version information.

⑦ Start Tomcat.

Note: the path to startup.sh is /usr/local/tomcat7/bin/startup.sh

⑧ Tomcat listens on port 8080 by default. Run the netstat command to view the listening state of port 8080.
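A sketch of starting Tomcat and checking the listener:

/usr/local/tomcat7/bin/startup.sh
netstat -anpt | grep 8080      # the java process should be listening on 8080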

Note: port 8009 is designed for use with Apache: when Apache is the front-end proxy server, received requests are forwarded to Tomcat's port 8009. Port 8009 is not needed in this experiment.

Port 8080: the port on which Tomcat listens for client requests; it is also the port that client requests are forwarded to when nginx acts as the front-end proxy.

Port 8005: the port used to shut down Tomcat, not needed in this lab.

⑨ Firewall Rule configuration

⑩ Install Tomcat on tomcat-2 and do the related operations in the same way, repeating steps ① ~ ⑨. The screenshots are omitted here; the steps are exactly the same.

Open a browser to test access to tomcat-1 and tomcat-2, respectively:

An extra knowledge point:

Run the / usr/local/tomcat7/bin/shutdown.sh command if you want to shut down tomcat.

3) As you can see, the access was successful, which means our Tomcat installation is complete.

Let's modify the main configuration file.

① Open the Tomcat main configuration file, /usr/local/tomcat7/conf/server.xml.

② Set the default virtual host and add jvmRoute.

Note: jvmRoute is the JVM identifier, i.e. the label shown at the top of the test page. In a real production environment, all back-end Tomcat identifiers should be the same; here, for the demonstration, my two Tomcat identifiers are set to different values so that the results are easy to verify later.

On tomcat-1 I set the identifier to tomcat-1.
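The jvmRoute attribute lives on the Engine element of server.xml; a minimal sketch of that line as it might look on tomcat-1 (the rest of the file is Tomcat's stock server.xml):

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat-1">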

③ Modify the default virtual host, point the website file path to /web/webapp1, and add a Context segment inside the Host section.

Description: Context is the web application context, a class that encapsulates each user's session, the current HTTP request, the requested page and other information. Its purpose is to provide access to the whole current context, including the request object, so that information can be shared between pages.

docBase: specifies the file path of the web application (the actual directory of your application), either as an absolute path or as a path relative to the appBase attribute. If the web application uses an open directory structure, specify the root directory of the web application; if the web application is a war file, specify the path of the war file (i.e. the location of the project).

path: specifies the URL entry for accessing the web application (i.e. an alias for the physical path given by docBase).

reloadable: if this attribute is set to true, the Tomcat server monitors changes to class files in the WEB-INF/classes and WEB-INF/lib directories while running, and automatically reloads the web application when it detects updated class files.

To put it simply: since we want Tomcat to serve a custom website directory, we need to configure a virtual directory, i.e. the Context element, to connect the URL path with the directory on disk.
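A minimal sketch of the Context element described above, added inside the <Host> section of server.xml; the empty path makes /web/webapp1 the root application, and the attribute values are assumptions consistent with the description:

<Context docBase="/web/webapp1" path="" reloadable="true" />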

④ Add the document directory and test files.

The index.jsp content is shown in the original figure; the visible text of the page is the title "Tomcat-1", the heading "Session serviced by tomcat", and the fields "Session ID" and "Created on".
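The actual test page is only shown as a screenshot; a minimal /web/webapp1/index.jsp consistent with that visible text might look like the following (purely illustrative, not the author's exact file):

<%@ page language="java" contentType="text/html; charset=UTF-8" %>
<html>
<head><title>Tomcat-1</title></head>
<body>
<h2>Session serviced by tomcat</h2>
<table border="1">
  <tr><td>Session ID</td><td><%= session.getId() %></td></tr>
  <tr><td>Created on</td><td><%= new java.util.Date(session.getCreationTime()) %></td></tr>
</table>
</body>
</html>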

⑤ Stop Tomcat, check the configuration file, and start Tomcat.

⑥ Repeat steps ① ~ ⑤ on tomcat-2; only jvmRoute differs from tomcat-1. In addition, to make it easy to tell which node serves a request, the title of the test page is also different (in a production environment the two Tomcat servers would serve identical content). All other configuration is the same.

Screenshots of the differing and identical parts are as follows (the marked regions differ; everything else is the same):

Access the nginx host with a browser to verify the load balancer:

Results of the first visit:

Results of the second visit:

Note: as the results above show, nginx distributes access requests to the back-end tomcat1 and tomcat2 in turn, so client requests are load balanced, but the session IDs differ (i.e. session persistence is not yet implemented), which puts great pressure on the back-end servers.

Let's verify the health check:

Description: shut down a tomcat host and test the access with a client browser.

First turn off tomcat1's tomcat service:

Verify:

No matter how often you refresh the page, tomcat2 always serves it, which shows that the health check works as expected.

Now for the key part of this blog:

Configure tomcat to implement session persistence through redis

1) Install redis and start the service

① Download the redis source code:

wget http://download.redis.io/releases/redis-3.2.3.tar.gz

② Unpack, compile, and install redis.
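A sketch of the standard source build:

tar zxf redis-3.2.3.tar.gz
cd redis-3.2.3
make && make install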

The following is some information that appears during the installation process, which you can take a look at:

Figure (1)

Figure (2)

Description: from the figure above we can easily see that redis installs its files under /usr/local: in /usr/local/bin, /usr/local/share, /usr/local/include, /usr/local/lib, and /usr/local/share/man.

③ Change to the utils directory and run the redis initialization script install_server.sh.

Description: I accept all the default values here, simply pressing Enter at every prompt.

Description: from the installation output above we can see that, after initialization, the redis configuration file is /etc/redis/6379.conf, the log file is /var/log/redis_6379.log, the data file dump.rdb is stored in the /var/lib/redis/6379 directory, and the startup script is /etc/init.d/redis_6379.

④ Now we are going to use systemd, so create a unit file named redis_6379.service under /etc/systemd/system.

The contents are as follows:

[Unit]
Description=Redis on port 6379

[Service]
Type=forking
ExecStart=/etc/init.d/redis_6379 start
ExecStop=/etc/init.d/redis_6379 stop

[Install]
WantedBy=multi-user.target

Note: Type=forking means the started process forks and the service runs in the background.

⑤ Start redis.

As we can see from the figure above, the service status is dead at first. The solution is to restart the service, as shown below:
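The commands behind those screenshots are roughly the following (assuming the unit file above was just created):

systemctl daemon-reload            # pick up the new redis_6379.service unit
systemctl restart redis_6379
systemctl status redis_6379
netstat -anpt | grep 6379          # redis should be listening on 127.0.0.1:6379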

As you can see from the figure above, redis is listening on port 6379 of the local loopback address by default.

⑥ Firewall Rule Settings

⑦ Use the redis-cli --version command to view the redis version.

Note: by displaying the results, we can see that the redis version is 3.2.3.

That completes installing redis from source.

2) Now that redis is installed, let's configure it.

① Because redis listens only on the local loopback address 127.0.0.1 by default, other hosts cannot communicate with it, so we need to set the addresses redis listens on and add the redis host's IP. At the same time, for security, we enable redis's password authentication feature with the requirepass parameter; by default redis allows access with an empty password, which is very insecure.

The final redis configuration file is as follows:

bind 127.0.0.1 192.168.1.13
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis_6379.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6379
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass pwd@123
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

② Restart the redis service.

③ With the redis configuration file in place, let's start redis and do a few simple operations.

Description: interpretation of the parameters of redis-cli -h 192.168.1.13 -p 6379 -a pwd@123:

This command connects to the redis server with IP 192.168.1.13 on port 6379, using the password pwd@123.

keys * lists all of redis's keys.

set name dabiaoge creates a key called name whose value is dabiaoge.

get name shows the value of the key name.

That is all of the redis command usage we will introduce for now.

3) Configure Tomcat session synchronization to redis

① Download the jar packages needed by tomcat-redis-session-manager. There are three main packages:

tomcat-redis-session-manager-tomcat7.jar

jedis-2.5.2.jar

commons-pool2-2.2.jar

② After the download is complete, copy them to $TOMCAT_HOME/lib.
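A sketch of the copy, with CATALINA_HOME as configured earlier:

cp tomcat-redis-session-manager-tomcat7.jar jedis-2.5.2.jar commons-pool2-2.2.jar /usr/local/tomcat7/lib/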

③ Modify Tomcat's context.xml:

The content in the figure is as follows:
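The figure itself is not reproduced. Based on the configuration documented by the tomcat-redis-session-manager project, the addition to context.xml is roughly the following Valve and Manager pair, here filled in with the Redis address and password used in this setup (treat it as a sketch, not the author's exact file):

<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="192.168.1.13"
         port="6379"
         database="0"
         password="pwd@123"
         maxInactiveInterval="60" />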

④ Restart the Tomcat service.

⑤ Repeat steps ① ~ ④ on tomcat-2.

Access the http://192.168.1.9/index.jsp test page through a browser

Refresh the page:

Description: it can be seen that different Tomcat instances are hit, but the session ID stays the same, which means the goal of the cluster has been achieved.

Note: starting from Tomcat 6, local session persistence is enabled by default. You can disable it during testing; it is very simple: in the context.xml file under Tomcat's conf directory, uncomment the following configuration:

Before modification:

After modification:
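In a stock Tomcat conf/context.xml the relevant commented-out line is the following; leaving it uncommented disables Tomcat's default session persistence across restarts:

<!-- Uncomment this to disable session persistence across Tomcat restarts -->
<Manager pathname="" />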

Restart the tomcat service after modification.

View redis:

Description: as you can see from the figure above, a key-value pair has been created in redis for the session persisted from tomcat2.

Configure tomcat to connect to the database

Note: Tomcat's session persistence problem has been solved through redis; now it is time to solve the problem of Tomcat connecting to the database.

With 192.168.1.10 as the MySQL database server, start the following experiment:

① Enter the interactive mode of the MySQL database.

② Create an authorized user and a database, create tables, insert data, and verify with queries.
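The exact statements are only shown in the original screenshots; the following is a sketch of the kind of thing done in this step, with the database, user, table names, and password purely hypothetical:

create database testdb;
grant all on testdb.* to 'test'@'192.168.1.%' identified by '123456';
use testdb;
create table t1 (id int, name varchar(20));
insert into t1 values (1,'tom'),(2,'jerry');
select * from t1;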

③ Configure the MySQL firewall rules.

④ Configure the Tomcat servers to connect to the MySQL database.

Download mysql-connector-java-5.1.22-bin.jar and copy it to the $CATALINA_HOME/lib directory

⑤ Context configuration

Configure a JNDI data source in Tomcat by adding a resource declaration to your Context.

Add the following inside the Context element:

The content in the figure is as follows:
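A sketch of such a resource declaration, following the Tomcat 7 JNDI DataSource documentation; the JNDI name jdbc/TestDB and the MySQL host come from this article, while the database name, user, and password are hypothetical placeholders:

<Resource name="jdbc/TestDB" auth="Container" type="javax.sql.DataSource"
          maxActive="100" maxIdle="30" maxWait="10000"
          username="test" password="123456"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://192.168.1.10:3306/testdb"/>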

⑥ Create a new directory under the /web/webapp1/ root directory to hold the website's XML configuration files, which Tomcat uses to connect to the MySQL database.

The content is added as follows:

MySQL Test App
DB Connection
jdbc/TestDB
javax.sql.DataSource
Container
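Those values are the flattened remains of a WEB-INF/web.xml resource reference; reassembled along the lines of the standard Tomcat JNDI example, it would look roughly like this:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
  <description>MySQL Test App</description>
  <resource-ref>
    <description>DB Connection</description>
    <res-ref-name>jdbc/TestDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
  </resource-ref>
</web-app>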

⑦ Restart the Tomcat service.

⑧ Repeat steps ④ ~ ⑦ on tomcat-2.

⑨ Test code

Now create a simple test.jsp page to test connectivity between Tomcat and MySQL.

The contents are as follows:

The visible text of the page is the title "MySQL" and the heading "Connect MySQL".
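The test page itself is only shown as a screenshot; a minimal JSP consistent with its visible text, looking up the jdbc/TestDB data source defined above and querying a hypothetical table t1, might look like this:

<%@ page language="java" contentType="text/html; charset=UTF-8"
         import="java.sql.*, javax.sql.*, javax.naming.*" %>
<html>
<head><title>MySQL</title></head>
<body>
<h2>Connect MySQL</h2>
<%
    Context ctx = new InitialContext();
    DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/TestDB");
    Connection conn = ds.getConnection();
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("select * from t1");   // t1 is a hypothetical table
    while (rs.next()) {
        out.println(rs.getInt(1) + " " + rs.getString(2) + "<br/>");
    }
    rs.close();
    stmt.close();
    conn.close();
%>
</body>
</html>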

⑩ Repeat step ⑨ on tomcat-2.

As you can see from the figure above, tomcat can now connect to the database.

Note:

You can refer to tomcat docs for all the above configurations.
