
Analysis of ProxySQL highlights, installation, configuration and testing


The following is an analysis of ProxySQL's highlights, installation, configuration, and testing. The material here differs slightly from what you will find in books: it was summarized by engineers in the course of working with users, so it carries some practical experience worth sharing. I hope it helps.

1. Highlights

Almost all configuration can be changed online (the configuration data is stored in SQLite); there is no need to restart proxysql.

Detailed state statistics, comparable to what pt-query-digest produces from analyzing slow logs: effectively a unified entry point for viewing SQL performance and SQL statement statistics ("Designed by a DBA for DBAs").

Powerful and flexible routing rules based on regular expressions and client_addr.

Automatic reconnection and automatic re-execution of queries using its connection pool: if a request is interrupted while connecting or while executing, proxysql re-executes the query according to its internal mechanism.

A query cache that is more flexible than MySQL's own QC: in the mysql_query_rules table you can control which statements are cached, by dimensions such as digest, match_pattern and client_addr.

Support for a connection pool and for multiplexing, which differs from connection-pool implementations such as Atlas; a detailed comparison appears later in this article.
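As a concrete illustration of the cache-control rules just described, here is a minimal sketch against the admin interface (the rule_id, match pattern and TTL below are made-up values; cache_ttl is in milliseconds):

-- illustrative only: cache matching SELECTs for 5 seconds
INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
VALUES (30, 1, '^SELECT .* FROM test\.t1', 5000, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;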

2. Installation

RPM package download address:

https://github.com/sysown/proxysql/releases (rpm installation is recommended)

Installing from source

Make sure you have installed the equivalent of each of these packages for your operating system:

automake, bzip2, cmake, make, gcc (version > 4.4), gcc-c++, git, openssl, openssl-devel, patch

git clone https://github.com/sysown/proxysql.git

Go to the directory where you cloned the repo (or unpacked the tarball) and run:

make
sudo make install

The first compilation takes around a couple of minutes. Afterwards the configuration file can be found at /etc/proxysql.cnf.

Encountered an error at the make step:

g++ -fPIC -c -o obj/ProxySQL_GloVars.oo ProxySQL_GloVars.cpp -std=c++11 -I../include -I../deps/jemalloc/jemalloc/include/jemalloc -I../deps/mariadb-client-library/mariadb_client/include -I../deps/libconfig/libconfig-1.4.9/lib -I../deps/re2/re2 -I../deps/sqlite3/sqlite3 -O2 -ggdb -Wall
cc1plus: error: unrecognized command line option "-std=c++11"
make[1]: *** [obj/ProxySQL_GloVars.oo] Error 1
make[1]: Leaving directory `/usr/local/src/proxysql-master/lib'
make: *** [build_lib] Error 2

A web search shows this is caused by a gcc version that is too low: the yum repos (including epel) of CentOS 6 only provide version 4.4.7.

Package gcc-4.4.7-17.el6.x86_64 already installed and latest version
Package gcc-c++-4.4.7-17.el6.x86_64 already installed and latest version

CentOS 7, by contrast, ships gcc 4.8.

After switching to CentOS 7 and installing/updating the packages above, the make step completes, but the make install step hits another problem:

install -m 0755 src/proxysql /usr/local/bin
install -m 0600 etc/proxysql.cnf /etc
install -m 0755 etc/init.d/proxysql /etc/init.d
if [ ! -d /var/lib/proxysql ]; then mkdir /var/lib/proxysql ; fi
update-rc.d proxysql defaults
make: update-rc.d: Command not found
make: *** [install] Error 127

update-rc.d is Ubuntu's tool for managing startup scripts; even though this step fails, using proxysql is not affected.

After installation, the service management script is automatically added as /etc/init.d/proxysql (you need to add /usr/local/bin/ to $PATH, or symlink the binary into a directory on $PATH, because the script invokes the proxysql command directly).

3. Configuration

There is a configuration file, /etc/proxysql.cnf, and a configuration database file, /var/lib/proxysql/proxysql.db. If proxysql.db exists, the startup process does not parse proxysql.cnf; the configuration file is read only on the first startup.

Managing the configuration through the admin interface is the officially recommended approach.

Log in to the admin interface:

After logging in with mysql -uadmin -padmin -P6032 -h127.0.0.1 (the default credentials), you can modify the login authentication by changing the two variables admin-admin_credentials and admin-mysql_ifaces in the global_variables table of the main database.
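A hedged sketch of that change (the credential string and bind address below are placeholders; LOAD/SAVE are ProxySQL's standard admin statements):

-- placeholder credentials and interface; adjust to your environment
UPDATE global_variables SET variable_value='admin:MyNewPass'
WHERE variable_name='admin-admin_credentials';
UPDATE global_variables SET variable_value='0.0.0.0:6032'
WHERE variable_name='admin-mysql_ifaces';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;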

Note: the admin interface stores its configuration in SQLite. SQLite supports standard SQL syntax and is largely compatible with MySQL, but it cannot switch databases with a use statement; the author made the use statement compatible (it does not raise an error), but it has no actual effect.

Configure the back-end DB servers. There are two ways. One is to assign a hostgroup_id yourself (for example, 0 for the write group and 1 for the read group) when you add each server to the mysql_servers table. The other is to add servers to mysql_servers without distinguishing hostgroup_id (for example, all set to 0), and then let proxysql assign hostgroup_id automatically, based on the values in the mysql_replication_hostgroups table and the read_only variable it detects on each server. I strongly recommend the first way, because it is completely under our control. With the second, if we mistakenly set a read server's read_only property to 0, proxysql will reassign it to the write group, which is definitely not what we expect.

4. Functional Test

Experimental environment:

MySQL [(none)]> select * from mysql_servers;
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname     | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 0            | 192.168.1.21 | 3307 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 192.168.1.10 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 192.168.1.4  | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+

The IP of the proxysql server is 192.168.1.34.
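The environment above corresponds to the first (explicit) way; a minimal sketch of how it would be populated from the admin interface (hostnames and ports taken from the table above):

INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '192.168.1.21', 3307);  -- write group
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, '192.168.1.10', 3306);  -- read group
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, '192.168.1.4',  3306);  -- read group
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;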

Load balancing test. Configure one master (db1, hostgroup 0) and two slaves (db2 and db3, hostgroup 1), and add a routing rule to the mysql_query_rules table:

insert into mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) values (10,1,'^SELECT',1,1);

All statements that begin with SELECT are routed to hostgroup 1; everything else is routed to hostgroup 0.
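Note that, as with any rule change, it only takes effect once loaded to runtime (and persisted if desired); these are the standard admin statements:

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;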

Connect the mysql clients of two servers to port 6033 of proxysql and execute select @@hostname to observe which back-end server each is assigned to.

By executing the command in mysql -e form, you can see the requests alternate between the two read servers:

[root@db1 ~]# mysql -udm -p'dm' -h192.168.1.34 -P6033 -e "select @@hostname" -s -N
db1
[root@db1 ~]# mysql -udm -p'dm' -h192.168.1.34 -P6033 -e "select @@hostname" -s -N
db5
[root@db1 ~]# mysql -udm -p'dm' -h192.168.1.34 -P6033 -e "select @@hostname" -s -N
db5
[root@db1 ~]# mysql -udm -p'dm' -h192.168.1.34 -P6033 -e "select @@hostname" -s -N
db1
[root@db1 ~]# mysql -udm -p'dm' -h192.168.1.34 -P6033 -e "select @@hostname" -s -N
db1
[root@db1 ~]# mysql -udm -p'dm' -h192.168.1.34 -P6033 -e "select @@hostname" -s -N
db5

Repeating the experiment to see what happens when mysql -e is given multiple statements:

[root@db1 ~]# mysql -udm -p'dm' -P6033 -h192.168.1.34 -e "select @@hostname;select @@hostname;select @@hostname" -s -N
dm-web5
dm-web5
dm-web5

From these results one might guess that, within one client connection, all queries are routed to the same backend.

But this is just an illusion, because select @@hostname happens to be a statement that implicitly disables multiplexing. According to the author: "For example, if you run SELECT @a, ProxySQL will disable multiplexing for that client and will always use the same backend connection." The only load-balancing mode proxysql currently has is weighted round-robin (confirmed by the author); there is no other mechanism.
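Since the balancing is weight-based, the read split can be skewed deliberately; a hedged sketch against the environment above (the weight value is illustrative):

-- illustrative: let this reader receive roughly twice the reads
UPDATE mysql_servers SET weight=2 WHERE hostgroup_id=1 AND hostname='192.168.1.10';
LOAD MYSQL SERVERS TO RUNTIME;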

Back-end server downtime testing

Use sysbench to run a read-only test against the architecture above, and run service mysqld stop on one server while it is in progress (all tests here are under the premise mysql-monitor_enabled=false).
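For reference, a minimal sketch of switching the monitor module off from the admin interface (this is the mysql-monitor_enabled variable discussed again in section 6):

UPDATE global_variables SET variable_value='false' WHERE variable_name='mysql-monitor_enabled';
LOAD MYSQL VARIABLES TO RUNTIME;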

Test command

alias sysbench_test='sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
--mysql-user=dm --mysql-password=dm --mysql-port=6033 \
--mysql-host=192.168.1.34 --oltp-tables-count=16 \
--num-threads=8 --oltp-skip-trx=on --oltp-read-only=on run'

The results are as follows

[root@db3 ~]# sysbench_test
sysbench 0.5: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8
Random number generator seed is 0 and will be ignored

Threads started!

ALERT: mysql_drv_query() for query 'SELECT c FROM sbtest16 WHERE id=4964' failed: 2013 Lost connection to MySQL server during query
ALERT: mysql_drv_query() for query 'SELECT c FROM sbtest12 WHERE id=4954' failed: 2013 Lost connection to MySQL server during query
ALERT: mysql_drv_query() for query 'SELECT c FROM sbtest7 WHERE id BETWEEN 4645 AND 4744' failed: 2013 Lost connection to MySQL server during query

Didn't we say there is automatic reconnection / re-execution? Why the errors, then? (Atlas has the same problem.)

However, after comparing the errors thrown by the sysbench command many times, there is a difference: proxysql reports one kind of error, failed: 2013 Lost connection to MySQL server during query, while Atlas reports two kinds, failed: 2013 Lost connection to MySQL server during query and failed: 1317 Query execution was interrupted. Does that mean re-execute is working? (No; in fact, this is the wrong way to think about it.)

The test method is wrong: shutting down the back-end mysql service is fundamentally the wrong way to test the "reconnect" feature, because a normal shutdown of mysql kills every thread in its processlist. We can verify this with the mysql client.

mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| db1        |
+------------+
1 row in set (0.00 sec)

** restart mysql from another tty **

mysql> select @@hostname;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    4
Current database: *** NONE ***

+------------+
| @@hostname |
+------------+
| db1        |
+------------+
1 row in set (0.00 sec)

You can see that the mysql client itself has a reconnect feature. Next, let's kill the mysql client's thread on the server (which is exactly what mysql does to all threads when it shuts down):

mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| db1        |
+------------+
1 row in set (0.00 sec)

** on the mysql server, find and kill this connection **

mysql> select @@hostname;
ERROR 2013 (HY000): Lost connection to MySQL server during query
mysql>

This is the error we saw when shutting down a slave during the earlier sysbench test. In other words, in that scenario the error is not a failure of proxysql's "reconnect" mechanism; it looks more like a matter for the "re-execute" mechanism. But the kill should be understood as a deliberate action by mysql, not an "exception", so it does not fall under proxysql's "re-execute" feature, and an error is reported.

This test should be done in a different way:

Simulate failures at the network layer, where communication becomes impossible.

On the two slaves, we use iptables to block proxysql from reaching port 3306 on the local machine, simulating both connection failures and broken connections.

Restart sysbench after iptables has been restarted; established connections would otherwise be retained, so the new rule should be placed first in the chain.

-A INPUT -s 192.168.1.34 -p tcp -m tcp --dport 3306 -j DROP

First: start the sysbench read-only test

Then: before the end of the test, modify the iptables of db1 to prevent requests from proxysql from entering

Then: observe the output of sysbench and proxysql.log

Result: sysbench waits for a long time and still cannot finish, and proxysql does not mark db1 as SHUNNED

Proxysql.log output:

2016-09-02 11:37:54 MySQL_Session.cpp:49:kill_query_thread(): [WARNING] KILL QUERY 133 on 192.168.1.4:3306
2016-09-02 11:37:54 MySQL_Session.cpp:49:kill_query_thread(): [WARNING] KILL QUERY 136 on 192.168.1.4:3306
2016-09-02 11:37:54 MySQL_Session.cpp:49:kill_query_thread(): [WARNING] KILL QUERY 135 on 192.168.1.4:3306
2016-09-02 11:37:54 MySQL_Session.cpp:49:kill_query_thread(): [WARNING] KILL QUERY 137 on 192.168.1.4:3306
2016-09-02 11:37:54 MySQL_Session.cpp:49:kill_query_thread(): [WARNING] KILL QUERY 138 on 192.168.1.4:3306
2016-09-02 11:37:54 MySQL_Session.cpp:49:kill_query_thread(): [WARNING] KILL QUERY 134 on 192.168.1.4:3306

As mysql-default_query_timeout=30000 suggests, after 30s proxysql did kill the timed-out statements. The 30s interval can be confirmed by comparing the time 'service iptables restart' was executed with the timestamps in proxysql.log, so this setting is effective.
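A sketch of inspecting or adjusting that timeout from the admin interface (the value is in milliseconds; 30000 is the value referenced above):

SELECT variable_value FROM global_variables WHERE variable_name='mysql-default_query_timeout';
UPDATE global_variables SET variable_value='30000' WHERE variable_name='mysql-default_query_timeout';
LOAD MYSQL VARIABLES TO RUNTIME;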

Running the same test directly against the back-end node (db1) gives the same result: sysbench waits for a long time without reporting an error.

So under this test method, the reason the expected result is not achieved may lie not in proxysql but in sysbench itself.

Capture the traffic after restarting iptables (once against proxysql and once against mysql in this test scenario), with commands like:

~]# date; service iptables restart; tcpdump -i em2 host 192.168.1.35 and port 3306 and host not 192.168.1.10 -w /tmp/sysbench-proxysql-network-issue.pcap

~]# date; service iptables restart; tcpdump -i em2 host 192.168.1.34 and port 3306 and host not 192.168.1.10 -w /tmp/sysbench-proxysql-network-issue.pcap

It turns out sysbench keeps retransmitting the handful of requests that can no longer get a response because of the new iptables rule, which is why it "waits forever" (Atlas has the same problem in this scenario).

In theory, proxysql killed some queries and should have returned errors to sysbench, so why did sysbench show none? (Probably because of the re-execute mechanism.)

Finally, after communicating with the author, I learned the cause: because the monitor module was disabled, proxysql could not detect what type of error occurred on the backend, so it could not perform the operations that correspond to the various backend errors (I had deliberately shut off the monitor module earlier).

Testing support for prepared statements

Many frameworks use prepared statements to avoid security problems such as SQL injection, and they also reduce some of MySQL's query-parsing overhead, so whether prepared statements are supported matters.

First, it is necessary to understand prepared statements at the MySQL protocol level; they come in two flavors (see reference):

Prepared Statements in Application Programs

Also called the BINARY protocol.

Capturing and analyzing a prepare, set, execute sequence, you can observe the client send COM_STMT_PREPARE.

Prepared Statements in SQL Scripts

Also called the TEXT protocol.

Capturing and analyzing a prepare, set, execute sequence, you can observe the client send COM_QUERY.
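For reference, the SQL-script (TEXT protocol) form looks like the following; each of these statements travels to the server as an ordinary COM_QUERY (the table name is a placeholder borrowed from the later tests):

PREPARE stmt FROM 'SELECT id FROM test.test_table WHERE id = ?';
SET @id := 1;
EXECUTE stmt USING @id;
DEALLOCATE PREPARE stmt;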

Regarding prepared statements, the author's reply and plan were as follows:

This was my exchange with the author while experimenting with version 1.2.1. Version 1.3.5 has since been released, and both protocols are supported as of 1.3. However, when setting the character set, neither the BINARY-protocol prepared form of set names xxx nor the plain query set names 'utf8' collate 'utf8_general_ci' (note: with a collation clause; without one it is handled fine) is processed correctly (for example, PHP's laravel framework sets the character set in the first form by default).

Fix: after testing version 1.3.7, both of the small prepared-statement bugs above have been resolved.

MySQL supports two types of prepared statements:

Using API

Using SQL (further details here)

SQL support is currently not available either, because PREPARE doesn't disable multiplexing, so it is possible that ProxySQL sends PREPARE on one connection and EXECUTE on another.

SQL support will be fixed in v1.2.3, see #684.

API support is planned in ProxySQL 1.3.

5. Connection pool and multiplexing (with a comparison to Atlas)

First, let's understand the two terms precisely, per the author's reply:

They are closely related to each other.

Connection pool is a cache of connections that can be reused.

Multiplexing is a technique to reuse these connections.

Later I gradually came to understand proxysql's multiplexing. The connection pool is a shared pool of connections created to the backend. During the execution of a server-side script there may be multiple SQL requests:

Atlas connection pooling: a connection is allocated from the pool and held for the entire execution of the script; only when the script completes is it returned to the pool.

proxysql connection pooling: every SQL request during script execution goes through its own allocate-use-return cycle.

Clearly, in terms of pooling efficiency, the proxysql approach should in theory be more efficient, and in most cases it maintains fewer connections to the DB.
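This difference can be observed from the admin interface; a hedged sketch using ProxySQL's stats schema (ConnUsed/ConnFree are the in-use and idle pooled connections):

SELECT hostgroup, srv_host, srv_port, ConnUsed, ConnFree, ConnOK, ConnERR
FROM stats.stats_mysql_connection_pool;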

Test scenario:

10 `select @@hostname` queries and 10 ordinary queries (`select id from test.test_table`); the ProxySQL/Atlas IP is 192.168.1.35; the IPs of the two read nodes are 192.168.1.37 and 192.168.1.38, respectively; ProxySQL/Atlas is restarted before each test.

There are two tests. The script for the first (connect once for each command) is as follows:

#!/bin/sh
for i in {1..10}; do
    mysql -uuser -p'passwd' -P6033 -h192.168.1.35 -e "select @@hostname;"              # select @@xxx disables multiplexing
done
for i in {1..10}; do
    mysql -uuser -p'passwd' -P6033 -h192.168.1.35 -e "select id from test.test_table;" # ordinary query
done

The script for the second test (connect once, execute all commands) is as follows:

for i in {1..10}; do
    query1+='select @@hostname;'               # select @@xxx disables multiplexing
    query2+='select id from test.test_table;'  # ordinary query
done
echo $query1 $query2 | mysql -uuser -p'passwd' -P6033 -h192.168.1.35

ProxySQL: test and analysis

Using tcpdump captures analyzed in wireshark, we can trace how ProxySQL routes the 20 queries and establishes connections to the back-end MySQL (identified by the originating port numbers on ProxySQL), which reveals the behavior of the connection pool and of multiplexing being disabled. The results are summarized below.

First test (connect once per command)

select @@xxx  # disables multiplexing
  1.35 source ports:
  1.35 -> 1.37: 42094 42096 42097 42099 42102 (forwarded 5 times)
  1.35 -> 1.38: 37971 37974 37976 37977 37979 (forwarded 5 times)
ordinary query
  1.35 source ports:
  1.35 -> 1.37: 42105 (forwarded 3 times)
  1.35 -> 1.38: 37980 (forwarded 7 times)

Second test (one connection, execute all commands)

select @@xxx  # disables multiplexing
  1.35 source ports:
  1.35 -> 1.37: (forwarded 0 times)
  1.35 -> 1.38: 37817 (forwarded 10 times)
ordinary query
  1.35 source ports:
  1.35 -> 1.37: (forwarded 0 times)
  1.35 -> 1.38: 37817 (forwarded 10 times)

Atlas, analyzed the same way for comparison

First test (connect once per command)

select @@xxx
  1.35 source ports:
  1.35 -> 1.37: (forwarded 0 times)
  1.35 -> 1.38: 38405 38407 38409 38411 38413 38415 38417 38419 38421 38423 (forwarded 10 times)
ordinary query
  1.35 source ports:
  1.35 -> 1.38: 38385 38387 38389 38391 38393 38395 38397 38399 38401 38403 (forwarded 10 times)

Second test (connect once, execute all commands)

select @@xxx
  1.35 source ports:
  1.35 -> 1.37: 42435 (forwarded 5 times)
  1.35 -> 1.38: 38312 (forwarded 5 times)
ordinary query
  1.35 source ports:
  1.35 -> 1.37: 42435 (forwarded 5 times)
  1.35 -> 1.38: 38312 (forwarded 5 times)

From the test and analysis results above, we can clearly see:

ProxySQL's load-balancing strategy is weighted round-robin, though not strictly one-by-one polling. Within a connection, once ProxySQL disables multiplexing because of certain statements (select @@xx or prepared statements), every subsequent statement on that connection is routed to the same back-end MySQL.

Atlas only polls and forwards; it does not distinguish query types (for example, queries that must be routed to one specific back-end MySQL). Moreover, in the first test (connect once per command), Atlas routed all 20 requests to the 1.38 MySQL and created a new connection every time, without using its connection pool.

6. How proxysql checks back-end server health

Active checks (via the monitor module)

Let's look at the related parameters:

| mysql-monitor_enabled           | true   |
| mysql-monitor_history           | 600000 |
| mysql-monitor_connect_interval  | 120000 |
| mysql-monitor_connect_timeout   |        |
| mysql-monitor_ping_interval     | 60000  |
| mysql-monitor_ping_max_failures | 3      |
| mysql-monitor_ping_timeout      | 60000  |
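All of these can be listed from the admin interface, for example:

SELECT variable_name, variable_value
FROM global_variables
WHERE variable_name LIKE 'mysql-monitor%';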

Based on the author's reply and packet-capture analysis, the differences between the two checks are summarized as follows:

ping is done using mysql_ping().

Packet capture shows that it sends a Request Ping over an existing connection (one from the connection pool).

connect is done using mysql_real_connect().

This is a complete cycle: the client establishes a connection to the server, logs in, logs out, and closes the connection.

The different return values of these two functions help proxysql understand exactly what is wrong with its connections to the backend.

Simulate scenarios to verify the settings above and better understand the fault-detection mechanism:

Two premises

Both web1 and web5 are ONLINE in proxysql, and no client accesses ProxySQL; that is, we only verify the behavior of the monitor module.

Set max_connections = 3 in web1's MySQL configuration file, restart web1's MySQL, and open several MySQL connections from other ttys to ensure proxysql cannot connect to that MySQL. At the same time, start capturing on the proxysql server with tcpdump -i em2 host 192.168.1.4 and port 3306 -w /tmp/web_shun.pcap, and then compare against the values above in wireshark.


Analysis of the capture verifies the effectiveness of the following settings:

mysql-monitor_connect_interval
mysql-monitor_ping_interval
mysql-monitor_ping_max_failures
mysql-shun_recovery_time_sec
mysql-ping_interval_server_msec

Passive checks

Passively means the global variable mysql-monitor_enabled is set to false. In this case proxysql does not actively probe a failed back-end server; instead, after requests routed to that server fail, the runtime layer changes the server's state to 'SHUNNED', and later back to 'ONLINE'. The application is unaware throughout (neither the mysql client command nor sysbench reports an error).

The relevant variables are

| mysql-shun_on_failures           | 5     |
| mysql-shun_recovery_time_sec     | 60    |
| mysql-query_retries_on_failure   | 1     |
| mysql-connect_retries_on_failure | 5     |
| mysql-connect_retries_delay      | 1     |
| mysql-connection_max_age_ms      | 0     |
| mysql-connect_timeout_server     | 1000  |
| mysql-connect_timeout_server_max | 10000 |
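Whether the runtime layer has shunned a backend can be checked via the runtime tables, for example:

SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;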

7. About the maximum number of connections in proxysql and mysql

First, to be clear: the maximum number of connections in MySQL is controlled by the max_connections variable. proxysql has two such settings: mysql_users.max_connections and mysql_servers.max_connections.

Let me go straight to my conclusions, extracted from my issue on github:

If mysql_servers.max_connections is reached, some connections will wait until mysql-connect_timeout_server_max expires, at which point proxysql returns the error SQLSTATE[HY000]: General error: 9001 Max connect timeout reached while reaching hostgroup 1 after 10000ms.

If mysql_users.max_connections is reached, the client sees the error 1040: Too many connections.

If the backend's global variable max_connections is reached and proxysql has no ConnFree to that backend, the client can still complete the connection to proxysql, but every query returns ERROR 1040 (#HY00): Too many connections; the monitor considers that backend shunned, and this is logged to proxysql.log.
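A hedged sketch of where the two ProxySQL-side limits live (the numbers are illustrative, and the username is the one used in the earlier tests):

UPDATE mysql_servers SET max_connections=1000 WHERE hostname='192.168.1.10' AND port=3306;
UPDATE mysql_users SET max_connections=500 WHERE username='dm';
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL USERS TO RUNTIME;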

8. Bugs and deficiencies, as far as I know

Bugs:

For prepared statements (Prepared Statements in Application Programs, see the explanation above; for example the laravel framework's prepared statements), if you change a table's structure on the backend, in some cases proxysql returns the following error:

SQLSTATE[HY000]: General error: 2057 A stored procedure returning result sets of different size was called. This is not supported by libmysql

It also cannot correctly handle the set names xxx of prepared statements, nor set names xxx collate xxxx in non-prepared statements.

Deficiencies:

No separation of front-end and back-end accounts. Mapping multiple front-end users to one or a few back-end users would simplify authorization on the back-end MySQL, especially when there are many projects and closely related database tables. It would also allow associating front-end users with the related tables or error logs in stats, which is very convenient for locating the offending project when many projects connect through ProxySQL.

For SQL errors, the error log records only the SQL statement and the error message returned by MySQL, with no associated user or table. With many projects, a lone SQL statement makes it hard to find the source (this is really related to the first point).

When managing through the admin interface, multiple changes cannot be applied to memory or saved to disk in one step. For example, if you have changed mysql_servers, mysql_users and mysql_query_rules, you must make the changes in each of the three areas take effect, and save them to disk, separately. By the same token, if you make some changes but ultimately want to discard them, it is not easy to restore the state from before the modifications.
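To illustrate the last point: the admin statements are per module, so after touching all three tables each pair must be issued separately, for example:

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;       SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;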

Summary

Stability: part of our business has already been switched to ProxySQL; it has run stably, and we have not encountered high CPU load or excessive memory consumption.

Operations/DBA friendliness: with the help of ProxySQL's error log, we found SQL problems we had not noticed before (duplicate primary keys and the like), and identified other issues through the related tables in the stats library.

Performance: also better than Atlas (see the comparison article).

Personally, I think that once the bugs and deficiencies above are resolved (to say nothing of other features that may arrive), ProxySQL will be even more powerful and complete.
