Set up memcache cache server (nginx+php+memcache+mysql) in super detail with pictures and texts

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Blogger QQ: 819594300

Blog address: http://zpf666.blog.51cto.com/

Feel free to contact the blogger with any questions; thank you for your support!

I. Brief Introduction to MemCache


MemCache is a free, open-source, high-performance, distributed memory object caching system used by dynamic web applications to reduce database load. By caching data and objects in memory it cuts down the number of database reads, thereby speeding up website access. MemCache is essentially a HashMap storing key-value pairs: any data (strings, objects, and so on) is kept in memory under a key, and the data can come from database calls, API calls, or the results of page rendering. MemCache's design philosophy is "small and powerful": its simple design allows rapid deployment and easy development and solves many problems facing large-scale data caching, while its open API lets MemCache be used from most popular programming languages, such as Java, C/C++/C#, Perl, Python, PHP, and Ruby.

Incidentally, why are there two names, Memcache and memcached? Memcache is the name of the project, while memcached is the file name of its server-side main program.

The official website of MemCache is http://memcached.org/.

MemCache access model

To deepen the understanding of memcache, here is how the distributed cache represented by memcache is accessed:

One point deserves special clarification: although MemCache is called a "distributed cache", MemCache itself has no distributed functionality at all, and MemCache servers in a cluster do not communicate with each other (in contrast, JBoss Cache, for example, notifies the other machines in the cluster to update or clear cached data whenever one server's cache is updated). The so-called "distribution" depends entirely on the client-side implementation, as in the flow shown in the diagram above.

Based on the same diagram, here is the process of writing a cache entry to MemCache:

1. The application passes the data to be cached to the client API with a write request.

2. The API feeds the Key into the routing algorithm module, which computes a server number from the Key and the list of MemCache cluster servers.

3. The server number is mapped to a concrete MemCache instance, i.e. its IP address and port number.

4. The API calls the communication module to talk to the server with that number, writes the data to it, and completes one write operation of the distributed cache.

Reading from the cache mirrors writing: as long as the same routing algorithm and server list are used, an application querying the same Key is always routed by the MemCache client to the same server, and as long as the data is still cached there, the read is guaranteed to hit.

This MemCache clustering approach is also designed with partition fault tolerance in mind. If Node2 goes down, the data stored on Node2 becomes unavailable, but Node0 and Node1 still remain in the cluster. The next request for a key that used to live on Node2 will certainly miss; at that point the data to be cached is first fetched from the database, and the routing algorithm module then picks one of Node0 and Node1 and stores the data there, so the next access hits the cache again. This clustering approach works well; its drawback is that the cost of a miss is relatively high.

Consistent Hash algorithm

The figure above reveals a very important point: the routing algorithm matters greatly to the management of the server cluster. Like a load-balancing algorithm, the routing algorithm determines which server in the cluster should be accessed. Let's start with a simple routing algorithm.

1. Remainder Hash

A simple routing algorithm is remainder hash: take the hash value of the cached data's Key modulo the number of servers, and the remainder is the index into the server list. If the HashCode of some str is 52 and there are 3 servers, then 52 % 3 = 1, which corresponds to Node1, so the routing algorithm routes str to the Node1 server. Because HashCode is highly random, remainder-hash routing keeps cached data fairly evenly distributed across the whole MemCache server cluster.
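As a quick sketch (Python used here purely for illustration; the node names are made up), the remainder routing looks like this:

```python
# Remainder-hash routing: server index = hash(key) % number_of_servers.
servers = ["Node0", "Node1", "Node2"]

def route(hash_code, servers):
    """Pick a server by taking the hash value modulo the server count."""
    return servers[hash_code % len(servers)]

# The article's example: a Key whose HashCode is 52, with 3 servers.
# 52 % 3 == 1, so the key is routed to Node1.
print(route(52, servers))  # Node1
```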

If the scalability of the server cluster is not considered, then the remainder Hash algorithm can almost meet most of the cache routing requirements, but when the distributed cache cluster needs to be expanded, it will be difficult.

Suppose the MemCache server cluster grows from 3 servers to 4. The server list changes, but remainder hash is still used: 52 % 4 = 0, which corresponds to Node0, yet str was originally stored on Node1, so the lookup misses. More generally, suppose there are 20 data items with HashCodes 0 through 19:

Now expand the cluster to 4 servers (in the original figure, bold red marks a hit):

If the cluster is expanded to 20 or more servers, only the keys for the first three HashCodes still hit, i.e. 15%. Reality is certainly more complicated than this, but it suffices to show that remainder-hash routing causes a large fraction of data to miss when the cluster is expanded (in fact, beyond missing, the large amount of now-unreachable data also keeps occupying memory on the original servers until it is evicted). In a website's workload, most data requests are served from the cache and only a small fraction of reads touch the database, so the database's load capacity is provisioned on the assumption that the cache works. When most cached data becomes unreadable because the cluster was expanded, the access pressure falls on the database, far exceeding its designed load capacity, which may well bring the database down.
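The hit-rate collapse can be made concrete with a small sketch (illustrative Python, not part of the original setup): a key still hits only if the old and new routing agree, i.e. hash % old_n == hash % new_n.

```python
# With 20 keys whose HashCodes are 0..19, count how many still route to the
# same node when the cluster grows from `old_n` to `new_n` servers.
def hit_rate(old_n, new_n, keys=range(20)):
    hits = [k for k in keys if k % old_n == k % new_n]
    return len(hits) / len(list(keys))

print(hit_rate(3, 4))   # 0.3  -- growing from 3 to 4 already misses most keys
print(hit_rate(3, 20))  # 0.15 -- at 20+ servers only HashCodes 0, 1, 2 hit
```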

There is a solution to this problem, and the steps to solve it are:

(1) Schedule the work for a low-traffic window, usually late at night, when the technical team works overtime to expand the cluster and restart the servers.

(2) Warm up the cache gradually with simulated requests so that the data is redistributed across the cache servers.

2. Consistent Hash algorithm

The consistent Hash algorithm implements the mapping from Key to cache server through a data structure called the consistent Hash ring. Put simply, consistent hashing organizes the entire hash space into a virtual ring (called the consistent Hash ring). For example, assuming the value space of the hash function H is 0 to 2^32 - 1 (i.e. the hash value is a 32-bit unsigned integer), the whole hash space looks like this:

Next, each server uses H for a hash calculation. Specifically, you can use the server's IP address or host name as the keyword, so that each machine can determine its position on the above hash ring and arrange it clockwise. Here, we assume that the calculated locations of the three nodes memcache are as follows:

Next, the hash value h of the data is calculated using the same algorithm, and the position of the data on the hash ring is determined.

If we have data A, B, C, D, 4 objects, the position after hashing is as follows:

According to the consistent hash algorithm, each object walks clockwise to the nearest service node: data A is bound to server01, D is bound to server02, and B and C land on server03.

The hash ring scheduling method thus obtained has high fault tolerance and scalability:

Suppose server03 is down:

You can see that only B and C are affected; they are relocated to server01. In general, with consistent hashing, if a server becomes unavailable, the affected data is only the data between that server and the previous server on the ring (i.e. the first server encountered walking counterclockwise); everything else is unaffected.

Consider another situation, if we add a server Memcached Server 04 to the system:

At this point A, D, and C are unaffected; only B needs to be relocated to the new server04. In general, with consistent hashing, if a server is added, the affected data is only the data between the new server and the previous server on the ring (i.e. the first server encountered walking counterclockwise); nothing else is affected.

To sum up, the consistent hashing algorithm only needs to relocate a small part of the data in the ring space for the increase or decrease of nodes, so it has good fault tolerance and scalability.

The disadvantage of consistent hashing: when there are too few service nodes, uneven division of the ring easily causes data skew. This can be solved by adding virtual nodes.
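A minimal consistent-Hash-ring sketch with virtual nodes might look like the following (illustrative Python; the md5 hash, the 100 replicas, and the server addresses are assumptions for the example, not MemCache client internals):

```python
import bisect
import hashlib

def h(key: str) -> int:
    """Map a string onto the 0 .. 2**32 - 1 hash ring (md5 is one common choice)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHashRing:
    def __init__(self, nodes, replicas=100):
        # Each physical node is placed on the ring `replicas` times as
        # virtual nodes, which smooths out data skew.
        self.replicas = replicas
        self.ring = {}          # ring position -> physical node
        self.sorted_keys = []
        for node in nodes:
            self.add(node)

    def add(self, node):
        for i in range(self.replicas):
            pos = h(f"{node}#{i}")
            self.ring[pos] = node
            bisect.insort(self.sorted_keys, pos)

    def get(self, key):
        # Walk clockwise: the first node position at or after h(key) owns the key.
        pos = h(key)
        idx = bisect.bisect(self.sorted_keys, pos) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

ring = ConsistentHashRing(["192.168.1.10:11211", "192.168.1.11:11211"])
print(ring.get("user:42"))  # one of the two servers, stable across calls
```

Because the lookup is deterministic, the same key always maps to the same server, and adding a node only moves the keys that fall between the new node's positions and their counterclockwise predecessors.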

More importantly, the more cache server nodes the cluster has, the smaller the impact of adding or removing one node, which is easy to understand. In other words, as the cluster grows, the probability of still hitting the originally cached data rises; although a small amount of cached data can no longer be read, the proportion is small enough that the resulting database accesses do not create fatal load pressure on the database.

Principle of MemCache implementation

First of all, let's make it clear that MemCache's data is stored in memory.

1. Accessing data in memory is faster than in a traditional relational database, because traditional relational databases such as Oracle and MySQL store data on disk to keep it persistent, and disk I/O is slow.

2. MemCache's data lives in memory, which also means that as soon as MemCache is restarted, the data disappears.

3. Since MemCache's data lives in memory, it is necessarily limited by the machine's word size: a 32-bit process can use at most about 2GB of memory, while a 64-bit machine can be regarded as having no practical upper limit.

Then let's take a look at the principle of MemCache. The most important thing for MemCache is how to allocate memory. MemCache uses a fixed space allocation method, as shown in the following figure:

This picture involves four concepts: slab_class, slab, page, and chunk. The relationship between them is:

1. MemCache divides its memory space into a set of slabs.

2. Each slab contains a number of pages, and each page is 1MB by default. If a slab occupies 100MB of memory, it holds 100 pages.

3. Each page contains a set of chunks. The chunk is where the data is actually stored, and all chunks within the same slab have the same fixed size.

4. Slabs whose chunks are the same size are grouped together, which is called a slab_class.

MemCache's way of allocating memory is called the allocator. The number of slabs is limited (a few, a dozen, or a few dozen, depending on the startup parameters).

Where a value is stored in MemCache is determined by the value's size: a value is always stored in the slab whose chunk size is closest to (and at least) the value's size. For example, if slab[1] has 80-byte chunks, slab[2] has 100-byte chunks, and slab[3] has 125-byte chunks (chunk sizes in adjacent slabs grow by a factor of roughly 1.25, adjustable with -f when MemCache starts), then an 88-byte value is placed in slab[2]. When storing into a slab, the slab first has to request memory, which comes in pages: when the first item is stored, a 1MB page is allocated to the slab regardless of the item's size. The slab then splits the page's memory into chunks of its chunk size, forming a chunk array, and finally one chunk from the array is chosen to hold the data.
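The chunk-size progression and slab selection can be sketched as follows (illustrative Python; the 80-byte starting size matches the article's example, while real memcached derives the base size from -n plus item-header overhead):

```python
# Chunk sizes grow by the factor given with -f (default 1.25).
def slab_classes(first_chunk=80, factor=1.25, max_size=1024):
    sizes = [first_chunk]
    while sizes[-1] * factor <= max_size:
        sizes.append(int(sizes[-1] * factor))
    return sizes

def pick_slab(value_size, sizes):
    """A value goes into the smallest chunk size that can hold it."""
    for s in sizes:
        if s >= value_size:
            return s
    return None  # value too large for any slab class

sizes = slab_classes()
print(sizes[:3])             # [80, 100, 125], as in the article's example
print(pick_slab(88, sizes))  # 100: an 88-byte value lands in the 100-byte slab
```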

What if the slab has no free chunk left to allocate? If MemCache was started without -M (-M disables LRU; with it, running out of memory reports an Out Of Memory error), MemCache evicts the data in the least recently used chunk of that slab and stores the latest data there.

The workflow of Memcache:

1. Check whether the client's requested data is in memcached. If it is, return it directly without any database operation; in the diagram this is path ①②③⑦.

2. If the requested data is not in memcached, query the database, return the result obtained from the database to the client, and cache a copy of the data in memcached (the memcached client is not responsible for this; the program must do it explicitly); the path is ①②④⑤⑦⑥.

3. Update the data in memcached every time you update the database, to keep them consistent.

4. When the memory space allocated to memcached is used up, the LRU (Least Recently Used) policy plus the expiration policy kicks in: expired data is replaced first, then the least recently used data.
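Steps 1 and 2 above amount to the classic cache-aside pattern, which can be sketched like this (illustrative Python, with a plain dict standing in for the memcached client and a stub standing in for the MySQL query; neither is a real API):

```python
import time

cache = {}   # key -> (value, expires_at); stand-in for the memcached client
TTL = 60     # seconds, like the one-minute default used later in this post

def query_db(key):
    return f"row-for-{key}"   # placeholder for a real SELECT

def get(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit: no database access
    value = query_db(key)                    # miss: read the database...
    cache[key] = (value, time.time() + TTL)  # ...and cache a copy explicitly
    return value

def update(key, value):
    # On writes, update the database and the cache together for consistency.
    # (the database write would happen here)
    cache[key] = (value, time.time() + TTL)

print(get("user:1"))  # first call reads the "database" and fills the cache
print(get("user:1"))  # second call is served from the cache
```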

Memcached characteristics:

The protocol is simple:

It uses a text-line-based protocol, so you can manipulate data directly via telnet against the memcached server.

Note: a text-line protocol transmits information as text and sends a line feed after each unit of information. For example, in an HTTP GET request, "GET /index.html HTTP/1.1" is one line, each header that follows is one line, and a blank line marks the end of the whole request.
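For example, a set followed by a get is just a few text lines on the wire. The sketch below (illustrative Python) builds the raw byte strings of the memcached text protocol; actually sending them would require a live memcached server:

```python
# The memcached text protocol is line-oriented: each command is one line
# terminated by \r\n, with the data block on its own following line.
key, flags, exptime, value = "greeting", 0, 900, b"hello"

set_cmd = f"set {key} {flags} {exptime} {len(value)}\r\n".encode() + value + b"\r\n"
get_cmd = f"get {key}\r\n".encode()

print(set_cmd)  # b'set greeting 0 900 5\r\nhello\r\n'
print(get_cmd)  # b'get greeting\r\n'
```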

Based on libevent event handling:

Libevent is a program library written in C that wraps event-handling facilities such as BSD's kqueue and Linux's epoll into a single interface, giving better performance than traditional select.

Built-in memory management:

All data is stored in memory, which is faster than disk access. When memory fills up, unused cache entries are automatically evicted by the LRU algorithm. However, disaster recovery of the data is not provided: restart the service and all data is lost.

Distributed system

The memcached servers do not communicate with each other; each accesses its data independently and shares nothing. The server itself has no distributed capability; distributed deployment depends on the memcache client.

Installation of Memcache

There are two parts: installing the memcache server and installing the memcached client.

The so-called server-side installation means installing Memcache on the server (usually a Linux system) to store the data.

The so-called client installation means letting php (or another program; Memcache has good API support in many languages) use the data provided by the server-side Memcache, which requires adding an extension to php.


II. CentOS 7.2 + nginx + php + memcache + mysql

Environment description:

Nginx and php:

Required software: nginx-1.10.2.tar.gz, php-5.6.27.tar.gz

IP address: 192.168.1.8

MySQL:

Required software: mysql-5.7.13.tar.gz

IP address: 192.168.1.9

Memcache:

Required software: memcached-1.4.33.tar.gz

IP address: 192.168.1.10

The virtual machine environment is as follows:

Let's start the formal experiment:

1) install nginx

① Unpack zlib

Note: there is no need to compile it, just unpack it.

② Unpack pcre

Note: there is no need to compile it, just unpack it.

③ Install the nginx dependency packages with yum

④ Download and install the nginx source package

Download the nginx source package from: http://nginx.org/download

Unpack, compile, and install the nginx source package:

The configure, compile, and install step shown in the figure is:

./configure --prefix=/usr/local/nginx1.10 --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre=/root/pcre-8.39 --with-zlib=/root/zlib-1.2.8 --with-http_ssl_module --with-http_gzip_static_module --user=www --group=www && make && make install

Notes: --with-pcre sets the source directory of pcre; --with-zlib sets the source directory of zlib.

Compiling nginx requires the source code of these two libraries.

⑤ Make soft links

⑥ Check the nginx configuration file syntax

⑦ Start nginx

⑧ Open a firewall exception for port 80

⑨ Browse the nginx web page from a client browser to test nginx

2) Install php

① Install libmcrypt

② Install the php dependency packages with yum

③ Unpack, compile, and install the php source package

The configure, compile, and install step shown in the figure is:

./configure --prefix=/usr/local/php5.6 --with-mysql=mysqlnd --with-pdo-mysql=mysqlnd --with-mysqli=mysqlnd --with-openssl --enable-fpm --enable-sockets --enable-sysvshm --enable-mbstring --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --with-mhash --with-mcrypt=/usr/local/libmcrypt --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --with-bz2 --enable-maintainer-zts && make && make install

④ Copy the php.ini sample file

Modify the /etc/php.ini file and change short_open_tag to On (to support php short tags).

⑤ Create the php-fpm service startup script and start the service

Provide the php-fpm configuration file and edit it:

Note: if nginx and php are deployed on separate hosts, this must be changed to php's real IP, and the php server must then open a firewall exception for port 9000. Everything else is the same as in this post.

Start the php-fpm service:

3) install mysql

(operate on 192.168.1.9 host)

Because mariadb-libs is installed by default on CentOS 7.2, uninstall it first.

Check whether mariadb is installed:

rpm -qa | grep mariadb

Uninstall mariadb:

rpm -e --nodeps mariadb-libs

For the mysql installation itself, refer to my mysql 5.7.13 installation post: http://zpf666.blog.51cto.com/11248677/1908988

Here my mysql is already installed; check whether the service is started:

Open a firewall exception for port 3306:

4. Install the memcached server

(operate on the 192.168.1.10 host)

Memcached's event handling is based on libevent. Libevent is a library that wraps event-handling facilities such as Linux's epoll and BSD's kqueue into a unified interface, so that even as the number of server connections grows, I/O performance holds up. Memcached uses this libevent library, which is why it performs so well on Linux, BSD, Solaris, and other operating systems.

① Install memcached's dependency library libevent

② Install memcached

③ Check whether memcache was installed successfully

As the steps above show, compiling the memcached server is very simple. At this point you can start the server and put it to work.

④ Configure environment variables

Enter the user's home directory, edit .bash_profile, and add the new directory to the system environment variable LD_LIBRARY_PATH. The lines to add are:

MEMCACHED_HOME=/usr/local/memcached

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MEMCACHED_HOME/lib

⑤ Start the memcache service

Note: -m is followed by the memory allocated to memcache, which must be less than the host's memory. My host has 2048MB of memory, so I gave it 1024MB.

Description of startup parameters:

-d run as a daemon.

-m the amount of memory allocated to Memcache, in MB (default 64MB).

-l the IP address to listen on (default INADDR_ANY, all addresses).

-p the TCP port Memcache listens on, preferably above 1024.

-u the user to run Memcache as; this parameter is required if you are currently root.

-c the maximum number of concurrent connections, default 1024.

-P the file in which to save Memcache's pid.

-M return an error when memory is exhausted instead of evicting items.

-f the chunk-size growth factor, default 1.25.

-n the minimum space allocated per item (key+value+flags), default 48.

-h display help.

⑥ Open a firewall exception for port 11211

⑦ Reload the user environment variables

An error was reported while reloading the environment variables; to make that command work, run the following instead.

Then reload the environment variables again:

⑧ Write the memcached service start/stop script

The script reads as follows:

#!/bin/sh
#
# pidfile: /usr/local/memcached/memcached.pid
# memcached_home: /usr/local/memcached
# chkconfig: 35 21 79
# description: Start and stop memcached Service

# Source function library
. /etc/rc.d/init.d/functions

RETVAL=0
prog="memcached"
basedir=/usr/local/memcached
cmd=${basedir}/bin/memcached
pidfile="$basedir/${prog}.pid"
# interface to listen on (default: INADDR_ANY, all addresses)
ipaddr="192.168.1.10"
# listen port
port=11211
# username for memcached
username="root"
# max memory for memcached, default is 64M
max_memory=1024
# max connections for memcached
max_simul_conn=10240

start() {
    echo -n $"Starting service: $prog"
    $cmd -d -m $max_memory -u $username -l $ipaddr -p $port -c $max_simul_conn -P $pidfile
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}

stop() {
    echo -n $"Stopping service: $prog"
    run_user=$(whoami)
    pidlist=$(ps -ef | grep $run_user | grep memcached | grep -v grep | awk '{print $2}')
    for pid in $pidlist
    do
        kill -9 $pid
        if [ $? -ne 0 ]; then
            return 1
        fi
    done
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}

# See how we were called.
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        stop
        start
        ;;
  *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
esac
exit $RETVAL

Make the script executable:

Description:

The role of return in a shell script:

1) It terminates a function.

2) The return command can take an integer argument, which becomes the function's exit status; the code is returned to the script that called the function and is assigned to the variable $?.

3) Command format: return value

Don't rush off yet: the memcache service script is configured, but there is a problem.

As the figure above shows, memcache cannot be restarted by the script, because the memcache service is already running; it was started manually back in step ⑤, so the service script cannot restart it now. The solution is as follows:

Kill the process of the running memcache service.

Then start and restart the service:

At this point, you can control the memcache service with the service script.

5. Configure nginx.conf file (operate on nginx&&php host)

① Configure the nginx.conf file as follows

Delete everything originally in the file and configure it as follows:

Note: my nginx&&php host here has 4 cores. If yours does not, adjust worker_cpu_affinity accordingly; if it is 4 cores, ignore this note and copy everything as written.

user www www;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid logs/nginx.pid;

events {
    use epoll;
    worker_connections 65535;
    multi_accept on;
}

http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    tcp_nodelay on;
    client_header_buffer_size 4k;
    open_file_cache max=102400 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;
    client_header_timeout 15;
    client_body_timeout 15;
    reset_timedout_connection on;
    send_timeout 15;
    server_tokens off;
    client_max_body_size 10m;
    fastcgi_connect_timeout 600;
    fastcgi_send_timeout 600;
    fastcgi_read_timeout 600;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_temp_path /usr/local/nginx1.10/nginx_tmp;
    fastcgi_intercept_errors on;
    fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
    gzip on;
    gzip_min_length 2k;
    gzip_buffers 4 32k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;
    gzip_proxied any;

    server {
        listen 80;
        server_name www.benet.com;
        #charset koi8-r;
        #access_log logs/host.access.log main;

        location ~* ^.+\.(jpg|gif|png|swf|flv|wma|asf|mp3|mmf|zip|rar)$ {
            valid_referers none blocked www.benet.com benet.com;
            if ($invalid_referer) {
                #return 302 http://www.benet.com/img/nolink.jpg;
                return 404;
                break;
            }
            access_log off;
        }

        location / {
            root html;
            index index.php index.html index.htm;
        }

        location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
            expires 30d;
            #log_not_found off;
            access_log off;
        }

        location ~* \.(js|css)$ {
            expires 7d;
            log_not_found off;
            access_log off;
        }

        location = /(favicon.ico|robots.txt) {
            access_log off;
            log_not_found off;
        }

        location /status {
            stub_status on;
        }

        location ~ .*\.(php|php5)?$ {
            root html;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
            fastcgi_cache cache_fastcgi;
            fastcgi_cache_valid 200 302 1h;
            fastcgi_cache_valid 301 1d;
            fastcgi_cache_valid any 1m;
            fastcgi_cache_min_uses 1;
            fastcgi_cache_use_stale error timeout invalid_header http_500;
            fastcgi_cache_key http://$host$request_uri;
        }

        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

② Restart the nginx service

③ Write a php test page

④ Access the test1.php test page from a client browser

6. Memcache client (operating on nginx&&php host)

Description: memcache is divided into a server side and a client side; the server stores the cache, and the client operates on it.

Install the php extension library (php memcache)

Description: install the PHP Memcache extension.

You can use the pecl installer that comes with php:

/usr/local/php5.6/bin/pecl install memcache

Or install from source, which builds php's extension library file memcache.so.

① Install the memcache extension library

The command shown in the figure is:

./configure --enable-memcache --with-php-config=/usr/local/php5.6/bin/php-config && make && make install

After the installation completes, note the extension path printed on the last line:

/usr/local/php5.6/lib/php/extensions/no-debug-zts-20131226/

② Modify php.ini

Add the following line:

extension=/usr/local/php5.6/lib/php/extensions/no-debug-zts-20131226/memcache.so

③ Restart the php-fpm service

④ Test

Description: check that the php extension is installed correctly.

The main thing is to see whether a memcache section appears in the output.

⑤ Access test1.php again in the client browser

There is no memcache section, and no memcache under the session section either.

This is because the fastcgi cache is configured in the nginx configuration file. Unless that line is commented out, the test1.php response accessed by the client is cached by fastcgi, and because of that cache, memcache never comes into play.

The solution is as follows:

Refresh the test1.php page again, and you can see the following two screenshots:

⑥ Write the test2.php test page

Description: this page tests writing data to and reading data from the memcache server.

The details are as follows:

Access test2.php from the client:

⑦ uses memcache to realize session sharing

Configure sessions in php.ini to use memcache mode.

Description: line 1394 is modified, and line 1424 is added.

The two lines read:

session.save_handler = memcache

session.save_path = "tcp://192.168.1.10:11211?persistent=1&weight=1&timeout=1&retry_interval=15"

Note:

session.save_handler: sets the storage backend for sessions. Session data is stored in files by default; to use custom handling such as memcache, change it to session.save_handler = memcache.

session.save_path: sets where sessions are stored. Multiple memcached servers are separated by commas and can take additional parameters such as "persistent", "weight", "timeout", and "retry_interval", like this:

"tcp://host:port?persistent=1&weight=2,tcp://host2:port2"

Memcache session sharing can also be configured inside an application:

ini_set("session.save_handler", "memcache");

ini_set("session.save_path", "tcp://192.168.0.9:11211");

ini_set() only takes effect for the current php page; it neither modifies the php.ini file itself nor affects other php pages.

⑧ testing memcache usability

Restart php-fpm:

On the nginx&&php server, create a new file /usr/local/nginx1.10/html/memcache.php with the following contents:

This memcache.php test page mainly checks the following: when a client visits, a session is generated for it, recording at what point in time the data was cached into memcache.

Refresh the page and look:

Every time you reload the page, the value of now_time keeps rising while session_time does not change, because the session persists.

Visit http://192.168.1.8/memcache.php to check whether session_time comes from the Session stored in memcache; you can place different markers on different servers to see which server served the page.

You can query memcached directly with the sessionid:

Description: a result like session_time|i:1493893966; shows that the session is working properly.

By default, memcache listens on port 11211. If you want to clear the memcache cache on the server, you usually use:

Or

Note: flush_all does not delete the keys on memcache; it marks them all as expired.

Memcache security configuration:

Because memcache has no access control, use iptables to open memcache to the web server only.

7. Test memcache cache database data

① First create a test table on the MySQL server

② Write the test script

Purpose: to test whether memcache caches data successfully.

First, add a read-only database user for the script to use; the command format is:

Create the test script on the web server as follows:

③ access page test

Note: if "mysql" appears, memcached had no content and the data had to be fetched from the database.

Then refresh the page: if the "memcache" flag appears, the data this time came from memcached.

Memcached's cache time here is 1 minute by default; after a minute, memcached must fetch the data from the database again.

④ to check the Memcached cache

Use the telnet command to look at it.
