
Construction and principles of the Redis Cluster model


1. Redis cluster scheme

A Redis Cluster deployment typically offers high availability, scalability, distribution, and fault tolerance. Redis distributed schemes generally fall into the following categories:

1.1. Client partition scheme

The client decides which Redis node the data will be stored on, or which Redis node it will read from. The basic idea is to hash the key of the Redis data: a hash function maps a given key to a specific Redis node.


The representative client partition scheme is Redis Sharding, which was the industry's common multi-instance Redis clustering method before Redis Cluster appeared. Java's Redis client library Jedis supports Redis Sharding through ShardedJedis and, combined with a connection pool, ShardedJedisPool.
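To make this concrete, here is a minimal ShardedJedis sketch. The host/port values and the key are placeholders, and it assumes a Jedis 2.x/3.x dependency, since the Sharded* classes were removed from later Jedis releases:

import java.util.Arrays;
import java.util.List;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;

public class ShardingExample {
    public static void main(String[] args) {
        // Two independent Redis instances; the client hashes keys across them.
        List<JedisShardInfo> shards = Arrays.asList(
                new JedisShardInfo("127.0.0.1", 6379),
                new JedisShardInfo("127.0.0.1", 6380));

        try (ShardedJedisPool pool = new ShardedJedisPool(new JedisPoolConfig(), shards);
             ShardedJedis jedis = pool.getResource()) {
            // The key is hashed client-side to pick a shard;
            // the servers know nothing about each other.
            jedis.set("user:1000", "alice");
            System.out.println(jedis.get("user:1000"));
        }
    }
}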

Advantages

No third-party middleware is required, the partition logic is controllable, configuration is simple, nodes are independent of one another, and the scheme scales linearly with good flexibility.

Disadvantages

Clients cannot dynamically add or remove service nodes and must maintain the distribution logic themselves. Connections are not shared between clients, which wastes connections.

1.2. Proxy partition scheme

The client sends requests to a proxy component, which parses the client's data, forwards the request to the correct node, and finally returns the result to the client.

Advantages: the client's distribution logic is simplified, access is transparent to the client, switching cost is low, and the proxy separates forwarding from storage. Disadvantages: the extra proxy layer increases deployment complexity and adds performance overhead.


The mainstream implementations of proxy partition are Twemproxy and Codis.

1.2.1. Twemproxy

Twemproxy, also known as nutcracker, is an open-source Redis and Memcached proxy developed by Twitter. As a proxy, Twemproxy accepts access from multiple programs, forwards requests to the backend Redis servers according to its routing rules, and returns the results the same way. Twemproxy itself is a single point of failure, so it must be combined with LVS and Keepalived to build a highly available solution.


Advantages: widely applicable, highly stable, and the intermediate proxy layer can be made highly available. Disadvantages: no smooth horizontal scale-out or scale-in, no visual management interface, unfriendly to operations, and failover is not automatic.

1.2.2. Codis

Codis is a distributed Redis solution: to upper-layer applications, connecting to Codis-Proxy is no different from connecting directly to a native Redis server. Under the hood, Codis handles request forwarding and performs data migration without downtime. Codis uses a stateless proxy layer, so everything is transparent to the client.


Advantages

It provides high availability for both the upper Proxy layer and the underlying Redis; data sharding with automatic rebalancing; a command-line interface and RESTful API; monitoring and management interfaces; and dynamic addition and removal of Redis nodes.

Disadvantages

The deployment architecture and configuration are complex; cross-datacenter deployment, multi-tenancy, and authentication management are not supported.

1.3. Query routing scheme

The client sends a request to an arbitrary Redis instance, which then routes the request to the correct Redis node. Redis Cluster implements a hybrid form of query routing: instead of forwarding the request from one Redis node to another, the node redirects the client (via a MOVED reply) to the correct Redis node.
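To see the redirect from a client's point of view, here is a hedged Jedis sketch (addresses and the key are placeholders): a plain connection surfaces the redirect as a JedisMovedDataException, while JedisCluster follows it transparently:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisMovedDataException;

public class MovedExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // If the key's slot lives on another node, the server does not proxy
            // the request; it answers "MOVED <slot> <host:port>" and the client
            // is expected to retry against that node.
            jedis.set("user:1000", "alice");
        } catch (JedisMovedDataException e) {
            System.out.println("redirected to " + e.getTargetNode());
        }
    }
}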


Advantages

There is no central node; data is distributed across multiple Redis instances by slot; nodes can be scaled out or in smoothly; high availability and automatic failover are supported; operation and maintenance costs are low.

Disadvantages

Heavy reliance on the redis-trib tool; lack of monitoring and management; reliance on a Smart Client (which maintains connections, caches the routing table, and supports MultiOp and Pipeline). Failover detection is slower and less timely than with a central coordinator such as ZooKeeper. Gossip messages carry some overhead. Hot and cold data cannot be distinguished statistically.

2. Data distribution

2.1. Data distribution theory

A distributed database must first solve the problem of mapping the whole data set onto multiple nodes according to partition rules, that is, splitting the data set so that each node is responsible for a subset of the overall data.


There are usually two ways of data distribution: hash partition and sequential partition. The comparison is as follows:

Partitioning | Characteristics | Related products
Hash partition | Good dispersion; data distribution is independent of business; sequential access is not possible | Redis Cluster, Cassandra, Dynamo
Sequential partition | Prone to data skew; data distribution is tied to business; supports sequential access | BigTable, HBase, Hypertable

Since Redis Cluster uses hash partitioning rules, hash partitioning is discussed here. There are several common hash partitioning rules, described below:

2.1.1. Node remainder partition

Take some characteristic of the data, such as the Redis key or a user ID, and apply the formula hash(key) % N, where N is the number of nodes, to determine which node the data maps to.
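A minimal sketch of remainder partitioning (node names are placeholders); the last lines hint at the remapping problem discussed under "Disadvantages" below:

import java.util.List;

public class RemainderPartition {
    // Pick a node index for a key: hash(key) % N.
    static int nodeFor(String key, int nodeCount) {
        // Math.floorMod guards against negative hashCode() values.
        return Math.floorMod(key.hashCode(), nodeCount);
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("redis-1", "redis-2", "redis-3");
        String key = "user:1000";
        System.out.println(key + " -> " + nodes.get(nodeFor(key, nodes.size())));
        // Growing from 3 to 4 nodes changes hash(key) % N for most keys,
        // which is why expansion usually doubles the node count instead.
        System.out.println(key + " -> node index " + nodeFor(key, 4));
    }
}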


Advantages

The outstanding advantage of this method is simplicity, and it is often used in database sharding rules (splitting data across multiple databases and tables). Pre-partitioning is generally used: the number of partitions is planned in advance according to the expected data volume, for example 512 or 1024 tables, to guarantee capacity for some time to come, and tables are later migrated to other databases as load requires. Capacity is usually expanded by doubling the node count, which avoids disrupting all data mappings and triggering a full migration.

Disadvantages

When the number of nodes changes, for example when scaling out or in, the data-to-node mapping must be recalculated, which triggers data re-migration.

2.1.2. Consistent hash partition

Consistent hashing solves the stability problem well. All storage nodes are arranged on a closed hash ring; after hashing, each key is assigned to the next storage node found clockwise on the ring. When a node joins or leaves, only the keys on the ring segment adjacent to that node are affected.
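A compact sketch of such a ring using a TreeMap (the MD5-derived hash and the virtual-node count of 160 are illustrative choices, not part of any Redis client). Each physical node is placed at many points on the ring, which is the virtual-node refinement mentioned in the note further down:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes;

    ConsistentHashRing(int virtualNodes) { this.virtualNodes = virtualNodes; }

    // Place each physical node at several points on the ring so that
    // adding or removing a node only shifts small, spread-out key ranges.
    void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.put(hash(node + "#" + i), node);
    }

    void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.remove(hash(node + "#" + i));
    }

    // Walk clockwise from the key's position to the first node; wrap at the end.
    String nodeFor(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            return ((long) (d[3] & 0xFF) << 24) | ((d[2] & 0xFF) << 16)
                    | ((d[1] & 0xFF) << 8) | (d[0] & 0xFF);
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing(160);
        ring.addNode("redis-6379"); ring.addNode("redis-6380"); ring.addNode("redis-6381");
        System.out.println("user:1000 -> " + ring.nodeFor("user:1000"));
        ring.removeNode("redis-6380"); // only keys owned by redis-6380's segments move
        System.out.println("user:1000 -> " + ring.nodeFor("user:1000"));
    }
}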


Advantages

Adding or removing a node only affects its clockwise-adjacent neighbor on the hash ring; other nodes are unaffected.

Disadvantages

Adding or removing a node still causes part of the data on the ring to miss. With a small number of nodes, a node change shifts a large share of the ring's data mapping, so plain consistent hashing is unsuitable for schemes with few data nodes. To keep data and load balanced, ordinary consistent hash partitioning requires doubling or halving the number of nodes.

Note: because of these shortcomings, some distributed systems, such as Dynamo, improve on consistent hashing with virtual slots.

2.1.3. Virtual slot partition

Virtual slot partitioning makes clever use of the hash space: a well-dispersed hash function maps all data into a fixed range of integers, which are defined as slots. This range is generally much larger than the number of nodes; in Redis Cluster, for example, the slot range is 0-16383. The slot is the basic unit of data management and migration in the cluster. Using a large number of slots mainly makes data splitting and cluster expansion easier. Each node is responsible for a certain number of slots:


Suppose the cluster has 5 nodes; each node is then responsible for about 3276 slots on average. Because a high-quality hash function is used, the data mapped to each slot is usually uniform, so the data is divided evenly across the 5 nodes. Redis Cluster uses exactly this kind of virtual slot partitioning.

Node 1: hash slots 0 to 3276.
Node 2: hash slots 3277 to 6553.
Node 3: hash slots 6554 to 9830.
Node 4: hash slots 9831 to 13107.
Node 5: hash slots 13108 to 16383.

This structure makes it easy to add or remove nodes. To add a node 6, some slots are taken from nodes 1-5 and assigned to node 6. To remove node 1, its slots are first moved to nodes 2-5, and then node 1, now holding no slots, is removed from the cluster.

Because moving hash slots from one node to another does not stop service, adding or removing nodes, or changing the number of hash slots a node holds, never makes the cluster unavailable.
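As a sketch of this slot bookkeeping (node names are placeholders), the following assigns the 16384 slots evenly to the five nodes above and looks up a slot's owner:

import java.util.List;
import java.util.TreeMap;

public class SlotAssignment {
    static final int SLOTS = 16384;

    // Map the first slot of each contiguous range to its owning node.
    static TreeMap<Integer, String> assign(List<String> nodes) {
        TreeMap<Integer, String> firstSlotToNode = new TreeMap<>();
        int per = SLOTS / nodes.size(), rem = SLOTS % nodes.size(), start = 0;
        for (int i = 0; i < nodes.size(); i++) {
            firstSlotToNode.put(start, nodes.get(i));
            start += per + (i < rem ? 1 : 0); // spread the remainder over the first nodes
        }
        return firstSlotToNode;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node1", "node2", "node3", "node4", "node5");
        TreeMap<Integer, String> ranges = assign(nodes);
        // floorEntry finds the range that contains the slot.
        System.out.println("slot 3277 -> " + ranges.floorEntry(3277).getValue()); // node2
        // Adding a node6 means taking some slots from each existing node and
        // reassigning them; the cluster stays available while slots move.
    }
}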

2.2. Data Partition of Redis

Redis Cluster uses virtual slot partitioning: every key is mapped by a hash function to an integer slot in the range 0-16383. The calculation formula is: slot = CRC16(key) & 16383. Each node is responsible for maintaining a portion of the slots and the key-value data mapped to those slots.
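For illustration, the formula can be reproduced with the CRC16 variant Redis Cluster uses (CRC-16/XMODEM, polynomial 0x1021, initial value 0x0000); this sketch ignores hash tags ({...}), which a real client must also handle:

import java.nio.charset.StandardCharsets;

public class KeySlot {
    // CRC-16/XMODEM, the CRC16 variant used by Redis Cluster.
    static int crc16(byte[] data) {
        int crc = 0x0000;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // slot = CRC16(key) & 16383, equivalent to CRC16(key) % 16384.
    static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) & 16383;
    }

    public static void main(String[] args) {
        System.out.println("user:1000 -> slot " + slot("user:1000"));
    }
}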


2.2.1. Characteristics of Redis Virtual slot Partition

It decouples data from nodes, which simplifies scaling the cluster out and in. Nodes themselves maintain the slot mapping, so neither the client nor a proxy service needs to maintain slot partition metadata. Queries over the node-slot-key mapping are supported, which serves scenarios such as data routing and online scaling.
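As a hedged illustration of these mapping queries (the address is a placeholder), Jedis exposes the underlying CLUSTER commands directly:

import redis.clients.jedis.Jedis;

public class SlotIntrospection {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // CLUSTER KEYSLOT: ask the node which slot a key maps to.
            System.out.println("slot of user:1000 = " + jedis.clusterKeySlot("user:1000"));
            // CLUSTER SLOTS: the slot-range -> node mapping the cluster maintains.
            System.out.println(jedis.clusterSlots());
            // CLUSTER NODES: raw view of every node and the slots it serves.
            System.out.println(jedis.clusterNodes());
        }
    }
}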

2.3. Functional limitations of Redis clusters

Compared with a standalone instance, a Redis cluster has some functional limitations that developers need to know in advance and work around.

Support for key bulk operations is limited.

Batch operations such as mset and mget are only supported for keys that map to the same slot. Keys mapping to different slot values are not supported, because an operation such as mget would then span multiple nodes.
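The common workaround is hash tags: only the substring between { and } is hashed, so keys that share a tag land in the same slot. A hedged JedisCluster sketch (addresses and keys are placeholders):

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class HashTagExample {
    public static void main(String[] args) {
        Set<HostAndPort> seeds = new HashSet<>();
        seeds.add(new HostAndPort("127.0.0.1", 6379));
        try (JedisCluster cluster = new JedisCluster(seeds)) {
            // Both keys hash only "{user:1000}", so they share one slot...
            cluster.mset("{user:1000}.name", "alice", "{user:1000}.age", "30");
            System.out.println(cluster.mget("{user:1000}.name", "{user:1000}.age"));
            // ...whereas mget("user:1", "user:2") would fail if those keys
            // landed in slots served by different nodes.
        }
    }
}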

Support for key transaction operations is limited.

Transactions involving multiple keys are only supported when the keys are on the same node; transactions cannot be used when the keys are distributed across different nodes.

The key is the minimum granularity of data partitioning

A single large key object, such as a hash or list, cannot be split across different nodes.

Multiple database spaces are not supported

A standalone Redis supports 16 databases (db0 ~ db15), but in cluster mode only one database space, db0, can be used.

Replication structure supports only one layer

Slave nodes can only replicate master nodes; nested tree replication structures are not supported.

3. Build Redis cluster

Redis Cluster is the official high-availability solution for Redis, with 2^14 (16384) slots. When the cluster is created, these slots are distributed evenly across the Redis nodes.

The following shows how to start six Redis instances locally in cluster mode and use redis-trib.rb to create a cluster of three masters and three slaves. Building the cluster takes three steps:

3.1. Prepare the node

A Redis cluster is composed of multiple nodes; at least 6 are needed to form a complete, highly available cluster. Each node must enable the setting cluster-enabled yes so that Redis runs in cluster mode.

The node planning of the Redis cluster is as follows:

Node name | Port | Role | Replicates
redis-6379 | 6379 | master | -
redis-6389 | 6389 | slave | redis-6379
redis-6380 | 6380 | master | -
redis-6390 | 6390 | slave | redis-6380
redis-6381 | 6381 | master | -
redis-6391 | 6391 | slave | redis-6381

Note: it is recommended to keep all of the cluster's nodes under one unified directory, generally with three subdirectories, conf, data and log, storing the configuration, data and log files respectively. Put the configuration of all six nodes in the conf directory.

3.1.1. Create a directory for each instance of redis

$ sudo mkdir -p /usr/local/redis-cluster
$ cd /usr/local/redis-cluster
$ sudo mkdir conf data log
$ sudo mkdir -p data/redis-6379 data/redis-6389 data/redis-6380 data/redis-6390 data/redis-6381 data/redis-6391


3.1.2. Redis configuration file management

Configure the redis.conf of each instance according to the following template. The following is only the basic configuration needed to build a cluster, which may need to be modified according to the actual situation.

# run redis in the background
daemonize yes
# bound host address
bind 127.0.0.1
# data storage directory
dir /usr/local/redis-cluster/data/redis-6379
# process file
pidfile /var/run/redis-cluster/${Custom}.pid
# log file
logfile /usr/local/redis-cluster/log/${Custom}.log
# port number
port 6379
# enable cluster mode (remove the leading # comment)
cluster-enabled yes
# cluster configuration file, generated automatically on first start
cluster-config-file /usr/local/redis-cluster/conf/${Custom}.conf
# request timeout, set to 10 seconds
cluster-node-timeout 10000
# enable the aof log; it records a log entry for every write operation
appendonly yes


redis-6379.conf

daemonize yes
bind 127.0.0.1
dir /usr/local/redis-cluster/data/redis-6379
pidfile /var/run/redis-cluster/redis-6379.pid
logfile /usr/local/redis-cluster/log/redis-6379.log
port 6379
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6379.conf
cluster-node-timeout 10000
appendonly yes


redis-6389.conf

daemonize yes
bind 127.0.0.1
dir /usr/local/redis-cluster/data/redis-6389
pidfile /var/run/redis-cluster/redis-6389.pid
logfile /usr/local/redis-cluster/log/redis-6389.log
port 6389
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6389.conf
cluster-node-timeout 10000
appendonly yes


redis-6380.conf

daemonize yes
bind 127.0.0.1
dir /usr/local/redis-cluster/data/redis-6380
pidfile /var/run/redis-cluster/redis-6380.pid
logfile /usr/local/redis-cluster/log/redis-6380.log
port 6380
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6380.conf
cluster-node-timeout 10000
appendonly yes


redis-6390.conf

daemonize yes
bind 127.0.0.1
dir /usr/local/redis-cluster/data/redis-6390
pidfile /var/run/redis-cluster/redis-6390.pid
logfile /usr/local/redis-cluster/log/redis-6390.log
port 6390
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6390.conf
cluster-node-timeout 10000
appendonly yes


redis-6381.conf

daemonize yes
bind 127.0.0.1
dir /usr/local/redis-cluster/data/redis-6381
pidfile /var/run/redis-cluster/redis-6381.pid
logfile /usr/local/redis-cluster/log/redis-6381.log
port 6381
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6381.conf
cluster-node-timeout 10000
appendonly yes


redis-6391.conf

daemonize yes
bind 127.0.0.1
dir /usr/local/redis-cluster/data/redis-6391
pidfile /var/run/redis-cluster/redis-6391.pid
logfile /usr/local/redis-cluster/log/redis-6391.log
port 6391
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6391.conf
cluster-node-timeout 10000
appendonly yes


3.2. Environment preparation

3.2.1. Install the Ruby environment

$ sudo brew install ruby


3.2.2. Install the redis rubygem dependency

$ sudo gem install redis

Password:

Fetching: redis-4.0.2.gem (100%)

Successfully installed redis-4.0.2

Parsing documentation for redis-4.0.2

Installing ri documentation for redis-4.0.2

Done installing documentation for redis after 1 seconds

1 gem installed


3.2.3. Copy redis-trib.rb to the cluster root directory

redis-trib.rb is the official tool for managing Redis clusters. It ships in the src directory of the Redis source tree and wraps the Redis cluster commands into a simple, convenient, and practical operation tool.

$ sudo cp /usr/local/redis-4.0.11/src/redis-trib.rb /usr/local/redis-cluster


Check that the redis-trib.rb command environment works; the output is as follows:

$ ./redis-trib.rb

Usage: redis-trib <command> <options> <arguments ...>

  create          host1:port1 ... hostN:portN
                  --replicas <arg>
  check           host:port
  info            host:port
  fix             host:port
                  --timeout <arg>
  reshard         host:port
                  --from <arg>
                  --to <arg>
                  --slots <arg>
                  --yes
                  --timeout <arg>
                  --pipeline <arg>
  rebalance       host:port
                  --weight <arg>
                  --auto-weights
                  --use-empty-masters
                  --timeout <arg>
                  --simulate
                  --pipeline <arg>
                  --threshold <arg>
  add-node        new_host:new_port existing_host:existing_port
                  --slave
                  --master-id <arg>
  del-node        host:port node_id
  set-timeout     host:port milliseconds
  call            host:port command arg arg .. arg
  import          host:port
                  --from <arg>
                  --copy
                  --replace
  help            (show this help)

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

redis-trib.rb was written in Ruby by the Redis author. Its commands are as follows:

Command | Action
create | create a cluster
check | check the cluster
info | view cluster information
fix | repair the cluster
reshard | migrate slots online
rebalance | balance the slot count across cluster nodes
add-node | add a new node to the cluster
del-node | remove a node from the cluster
set-timeout | set the heartbeat connection timeout between cluster nodes
call | execute a command on all nodes in the cluster
import | import external redis data into the cluster

3.3. Install the cluster

3.3.1. Start the redis service nodes

Run the following commands to start the 6 redis nodes:

sudo redis-server conf/redis-6379.conf
sudo redis-server conf/redis-6389.conf
sudo redis-server conf/redis-6380.conf
sudo redis-server conf/redis-6390.conf
sudo redis-server conf/redis-6381.conf
sudo redis-server conf/redis-6391.conf


After startup completes, the redis instances are running in cluster mode. Check the process status of each redis node:

$ ps -ef | grep redis-server
0  1908  1  0  4:59  ??  0:00.01 redis-server *:6379 [cluster]
0  1911  1  0  4:59  ??  0:00.01 redis-server *:6389 [cluster]
0  1914  1  0  4:59  ??  0:00.01 redis-server *:6380 [cluster]
0  1917  1  0  4:59  ??  0:00.01 redis-server *:6390 [cluster]
0  1920  1  0  4:59  ??  0:00.01 redis-server *:6381 [cluster]
0  1923  1  0  4:59  ??  0:00.01 redis-server *:6391 [cluster]


The redis.conf of each redis node configures a cluster-config-file path; when the cluster nodes start, new cluster node configuration files are generated in the conf directory. The file list is as follows:

$ tree -L 3 .
.
├── appendonly.aof
├── conf
│   ├── node-6379.conf
│   ├── node-6380.conf
│   ├── node-6381.conf
│   ├── node-6389.conf
│   ├── node-6390.conf
│   ├── node-6391.conf
│   ├── redis-6379.conf
│   ├── redis-6380.conf
│   ├── redis-6381.conf
│   ├── redis-6389.conf
│   ├── redis-6390.conf
│   └── redis-6391.conf
├── data
│   ├── redis-6379
│   ├── redis-6380
│   ├── redis-6381
│   ├── redis-6389
│   ├── redis-6390
│   └── redis-6391
├── log
│   ├── redis-6379.log
│   ├── redis-6380.log
│   ├── redis-6381.log
│   ├── redis-6389.log
│   ├── redis-6390.log
│   └── redis-6391.log
└── redis-trib.rb

9 directories, 20 files


3.3.2. Associate the cluster nodes with redis-trib

List the six redis nodes from left to right, alternating master and slave.

$ sudo ./redis-trib.rb create --replicas 1 127.0.0.1:6379 127.0.0.1:6389 127.0.0.1:6380 127.0.0.1:6390 127.0.0.1:6381 127.0.0.1:6391


When creating the cluster, redis-trib first allocates the 16384 hash slots to the three master nodes redis-6379, redis-6380 and redis-6381, then points each slave node at its master for data synchronization.

>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:6379
127.0.0.1:6380
127.0.0.1:6381
Adding replica 127.0.0.1:6390 to 127.0.0.1:6379
Adding replica 127.0.0.1:6391 to 127.0.0.1:6380
Adding replica 127.0.0.1:6389 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: ad4b9ffceba062492ed67ab336657426f55874b7 127.0.0.1:6379
   slots:0-5460 (5461 slots) master
M: df23c6cad0654ba83f0422e352a81ecee822702e 127.0.0.1:6380
   slots:5461-10922 (5462 slots) master
M: ab9da92d37125f24fe60f1f33688b4f8644612ee 127.0.0.1:6381
   slots:10923-16383 (5461 slots) master
S: 25cfa11a2b4666021da5380ff332b80dbda97208 127.0.0.1:6389
   replicates ad4b9ffceba062492ed67ab336657426f55874b7
S: 48e0a4b539867e01c66172415d94d748933be173 127.0.0.1:6390
   replicates df23c6cad0654ba83f0422e352a81ecee822702e
S: d881142a8307f89ba51835734b27cb309a0fe855 127.0.0.1:6391
   replicates ab9da92d37125f24fe60f1f33688b4f8644612ee


Then type yes, and redis-trib.rb starts the node handshake and slot assignment; the output is as follows:

Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: ad4b9ffceba062492ed67ab336657426f55874b7 127.0.0.1:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: ab9da92d37125f24fe60f1f33688b4f8644612ee 127.0.0.1:6381
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 48e0a4b539867e01c66172415d94d748933be173 127.0.0.1:6390
   slots: (0 slots) slave
   replicates df23c6cad0654ba83f0422e352a81ecee822702e
S: d881142a8307f89ba51835734b27cb309a0fe855 127.0.0.1:6391
   slots: (0 slots) slave
   replicates ab9da92d37125f24fe60f1f33688b4f8644612ee
M: df23c6cad0654ba83f0422e352a81ecee822702e 127.0.0.1:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 25cfa11a2b4666021da5380ff332b80dbda97208 127.0.0.1:6389
   slots: (0 slots) slave
   replicates ad4b9ffceba062492ed67ab336657426f55874b7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


The cluster check verifies the number of hash slots occupied by each redis node and the slot coverage. Of the 16384 slots, the master nodes redis-6379, redis-6380 and redis-6381 hold 5461, 5462 and 5461 slots respectively.
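At this point the cluster can be exercised from application code; a hedged JedisCluster smoke test (one seed node is enough, since the client discovers the rest of the topology):

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterSmokeTest {
    public static void main(String[] args) {
        // One seed node suffices; JedisCluster fetches the slot map from it.
        Set<HostAndPort> seeds = new HashSet<>();
        seeds.add(new HostAndPort("127.0.0.1", 6379));
        try (JedisCluster cluster = new JedisCluster(seeds)) {
            // The client computes CRC16("hello") & 16383 and routes the command
            // to whichever master owns that slot, following MOVED if needed.
            cluster.set("hello", "world");
            System.out.println(cluster.get("hello"));
        }
    }
}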

3.3.3. Log of the redis master node

In the log you can see the slave node redis-6389 synchronizing from the master, which serves the initial sync with a background BGSAVE.

$ cat log/redis-6379.log
1907:C 05 Sep 16:59:52.960 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1907:C 05 Sep 16:59:52.961 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1907, just started
1907:C 05 Sep 16:59:52.961 # Configuration loaded
1908:M 05 Sep 16:59:52.964 * Increased maximum number of open files to 10032 (it was originally set to 256).
1908:M 05 Sep 16:59:52.965 * No cluster configuration found, I'm ad4b9ffceba062492ed67ab336657426f55874b7
1908:M 05 Sep 16:59:52.967 * Running mode=cluster, port=6379.
1908:M 05 Sep 16:59:52.967 # Server initialized
1908:M 05 Sep 16:59:52.967 * Ready to accept connections
1908:M 05 Sep 17:01:17.782 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
1908:M 05 Sep 17:01:17.812 # IP address for this node updated to 127.0.0.1
1908:M 05 Sep 17:01:22.740 # Cluster state changed: ok
1908:M 05 Sep 17:01:23.681 * Slave 127.0.0.1:6389 asks for synchronization
1908:M 05 Sep 17:01:23.681 * Partial resynchronization not accepted: Replication ID mismatch (Slave asked for '4c5afe96cac51cde56039f96383ea7217ef2af41', my replication IDs are '037b661bf48c80c577d1fa937ba55367a3692921' and '0000000000000000000000000000000000000000')
1908:M 05 Sep 17:01:23.681 * Starting BGSAVE for SYNC with target: disk
1908:M 05 Sep 17:01:23.682 * Background saving started by pid 1952
1952:C 05 Sep 17:01:23.683 * DB saved on disk
1908:M 05 Sep 17:01:23.749 * Background saving terminated with success
1908:M 05 Sep 17:01:23.752 * Synchronization with slave 127.0.0.1:6389 succeeded


Reference

Redis Development and Operations
