

How to troubleshoot a Linux server

2025-03-05 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report --

This article covers the essentials of "how to troubleshoot a Linux server". Many people run into these situations in real-world work, so let the editor walk you through how to handle them. I hope you read it carefully and come away with something useful!

Problem: server A cannot communicate with server B

Perhaps the most common network failure in practice is one server being unable to communicate with another server on the network. This section walks through the specific handling steps with an example: a server named dev1 cannot reach a network service (port 80) on another server named web1. The possible causes are varied, so we test step by step and use a process of elimination to find the root cause of the fault.

In practice you may skip some of the initial steps (such as checking the link), because later tests provide the same diagnostic information. For example, if we confirm that DNS works, we have also proved that the host can communicate with the local network. In this walkthrough, however, we will perform every step deliberately, in order to understand the testing methods available at each layer.

Is the problem on the client or server side?

You can use a quick test to narrow down the failure: try to reach the server from another host on the same network. In this example, we will call a second server on the same network as dev1 dev2, and try to reach web1 from it. If dev2 also cannot reach web1, then the problem is likely on web1 or in the network between dev1, dev2, and web1. If dev2 can reach web1 normally, then there is a good chance the problem lies with dev1. For the rest of this example we assume dev2 can reach web1, so we focus our troubleshooting on the dev1 side.
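
A quick way to run this cross-check from dev2, assuming the netcat (nc) utility happens to be installed there, is simply to attempt a TCP connection to port 80 on web1 (the host and port come from this example):

$ nc -zv web1 80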

Is the cable plugged in?

These troubleshooting steps are carried out on the client. First, make sure the client's own network connection is healthy. To do this, we can use the ethtool program (installed via the ethtool package) to check the link, that is, whether the Ethernet device has a physical connection to the network. If you are not sure which interface you are using, run the /sbin/ifconfig command to list all available network interfaces and their settings. Let's assume our Ethernet device is eth0:

$ sudo ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pg
        Wake-on: d
        Current message level: 0x000000ff (255)
        Link detected: yes

In the last line of the output, "Link detected" is set to "yes", so dev1 has a physical connection to the network. If this test had returned "no", we would need to physically check dev1's network connection and make sure the cable is plugged in properly. Once the physical connection is confirmed, move on to the next steps.

Note: ethtool is more than a link-detection tool; it can also diagnose and correct duplex problems. When a Linux server connects to a network, it usually auto-negotiates the transmission speed and whether the link is full duplex. In this example, ethtool reports a speed of 100Mb/s and full duplex. If you find that a host's network throughput is slow, its speed and duplex settings are the first thing to check. Run ethtool as shown above, and if you find the duplex set to half, run the following command:

$ sudo ethtool -s eth0 autoneg off duplex full

Here, replace eth0 with your own Ethernet device.
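
To confirm the change took effect, one option is to re-run ethtool and filter for the speed and duplex lines; the grep pattern below is just an illustration:

$ sudo ethtool eth0 | grep -E 'Speed|Duplex'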

Is the port normal?

Once you have confirmed the physical connection between the server and the network, the next step is to check whether the network interface on the host is configured correctly. The simplest way to do this is to run the ifconfig command with the interface as an argument. So to test the eth0 settings, run:

$ sudo ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:17:42:1f:18:be
          inet addr:10.1.1.7  Bcast:10.1.1.255  Mask:255.255.255.0
          inet6 addr: fe80::217:42ff:fe1f:18be/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:229 (229.0 B)  TX bytes:2178 (2.1 KB)
          Interrupt:10

In the above output, the second line is the most interesting, because it shows that the host is configured with an IP address (10.1.1.7) and a subnet mask (255.255.255.0). Now you need to confirm that these settings are correct. If the interface is not configured at all, try running sudo ifup eth0 and then run ifconfig again to see whether the interface appears. If the settings are wrong or the interface does not appear, check /etc/network/interfaces (Debian systems) or the /etc/sysconfig/network-scripts/ifcfg-<interface> file (Red Hat systems) and correct any errors in the network settings there. If the host gets its IP via DHCP, troubleshooting has to move to the DHCP server to find out why the host is not getting a proper lease.
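
For reference, a static configuration for this example host on a Debian system might look like the stanza below in /etc/network/interfaces; the address and netmask are taken from the example, and the gateway is an assumption based on the routing section that follows:

auto eth0
iface eth0 inet static
    address 10.1.1.7
    netmask 255.255.255.0
    gateway 10.1.1.1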

Is the problem in the local network?

After ruling out interface problems, check whether a default gateway is set and whether we can reach it. The route command shows the current routing table, including the default gateway:

$ sudo route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.1.1.0        *               255.255.255.0   U     0      0        0 eth0
default         10.1.1.1        0.0.0.0         UG    100    0        0 eth0

The line worth paying attention to here is the last one, the default entry. It shows that the host's gateway is 10.1.1.1. Note that because we added the -n option to the route command, it does not try to resolve these IP addresses to host names. Besides making the command run faster, the more important reason is that we do not want our troubleshooting to be muddied by any potential DNS problems. If you do not see a default gateway configured here, and the host you want to reach is on a different subnet (web1, at 10.1.2.5, is), then this is very likely the problem. To fix it, make sure the gateway is set either in /etc/network/interfaces on Debian systems or in the /etc/sysconfig/network-scripts/ifcfg-<interface> file on Red Hat systems; if the IP is assigned by DHCP, make sure the gateway is set correctly on the DHCP server. On a Debian system, run the following command to reset the interface:

$ sudo service networking restart

On a Red Hat system, run the following command to reset the interface:

$ sudo service network restart

Please note that even such basic operation commands vary from system to system.
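
On newer distributions, ifconfig and route are gradually being replaced by the ip command from the iproute2 package; assuming iproute2 is installed, the equivalent interface and routing-table checks are:

$ ip addr show eth0
$ ip route show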

Once the gateway is confirmed to be configured, use the ping command to confirm that we can communicate with it:

$ ping -c 5 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=3.13 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=1.43 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=1.79 ms
64 bytes from 10.1.1.1: icmp_seq=5 ttl=64 time=1.50 ms

--- 10.1.1.1 ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 4020ms
rtt min/avg/max/mdev = 1.436/1.966/... ms

As you can see, we can ping the gateway, which at least means we can communicate with hosts on the 10.1.1.0 network. If you cannot ping the gateway, there are several possible causes. First, the gateway may simply block ICMP packets. If so, tell your network administrator that blocking ICMP is an annoying habit with negligible security benefit, and then try to ping another Linux host on the same subnet. If ICMP is not being blocked, then the VLAN setting on the host's switch port may be wrong, and you need to look at the switch it is connected to.
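
If ICMP really is blocked, one alternative for checking whether a gateway on the same subnet is reachable at all is ARP: the arping utility (from the arping or iputils-arping package, which may need to be installed first) sends ARP requests instead of ICMP echo requests:

$ sudo arping -c 3 10.1.1.1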

Does DNS work properly?

Once we can communicate with the gateway, the next thing to test is whether DNS is working properly. Both nslookup and dig can be used to troubleshoot DNS problems, but since we only need a basic test here, we use the nslookup command to see whether web1 resolves to an IP address:

$ nslookup web1
Server:     10.1.1.3
Address:    10.1.1.3#53

Name:   web1.example.net
Address: 10.1.2.5

As shown above, the DNS server in this example works correctly: the name web1 expands to web1.example.net and resolves to 10.1.2.5. Of course, make sure the resolved IP address matches the address web1 is supposed to have. Since DNS is fine in this example we could move straight on to the next section, but DNS does sometimes go wrong, as described below.
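
If you prefer dig (typically provided by the dnsutils package on Debian or bind-utils on Red Hat), a rough equivalent of the same lookup, which should print 10.1.2.5 if the record matches this example, is:

$ dig +short web1.example.net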

No name server configured, or name server unreachable

If you see the following error, it may mean either that the host has no name servers configured or that those servers are unreachable:

$ nslookup web1
;; connection timed out; no servers could be reached

In either case, check the /etc/resolv.conf file to see whether any name servers are configured. If no name server IP addresses are listed there, add one to the file. If instead you see something like the following, troubleshoot the connection between the host and the name server, starting with ping:

search example.net
nameserver 10.1.1.3

If you cannot ping the name server and its IP address is on the same subnet as your host (10.1.1.3 is, in this example), the name server itself may be down. If you cannot ping the name server and its IP address is on a different subnet, skip ahead to the "Can I route to the remote host?" section and apply those steps to the name server's IP. If you can ping the name server but it is not answering DNS queries, skip ahead to the "Is the remote port open?" section.
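
To check whether the name server is answering queries at all, one option is to point the lookup directly at it; the dig options below (+time and +tries, which shorten the timeout) are just one way of doing this, using the addresses from the example:

$ dig @10.1.1.3 web1.example.net +time=2 +tries=1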

Missing search path or name server problem

After running the nslookup command, we may also get the following error message:

$ nslookup web1
Server:     10.1.1.3
Address:    10.1.1.3#53

** server can't find web1: NXDOMAIN

Here you do get a response, and the server reports that it cannot find web1. This points to two possibilities. The first is that web1's domain is not in your DNS search path, which is set by the search line in /etc/resolv.conf. A good way to test this is to run the same nslookup command with the fully qualified domain name (web1.example.net in this example). If that resolves correctly, either always use the fully qualified name in your commands, or add its domain to the search path in /etc/resolv.conf if you would rather not type it every time.

If even the fully qualified domain name does not resolve, the problem lies with the name server, and the following guidelines help narrow it down. If the name server is the one that hosts the records for that zone, check its zone configuration. If it is a recursive name server, test whether recursion works by looking up some other domain. If other domains resolve fine, the problem probably lies with the remote name server that hosts the zone in question.
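
A recursion test can be as simple as pointing a lookup for some unrelated, well-known domain directly at the name server from the example; the domain below is only an illustration:

$ nslookup www.kernel.org 10.1.1.3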

Can I route to the remote host?

After ruling out DNS problems and confirming that web1 resolves to 10.1.2.5, test whether you can route to the remote host. If ICMP is allowed on your network, the quickest test is simply to ping web1. If the ping succeeds, packets are being routed to the destination, and you can skip ahead to the "Is the remote port open?" section. If you cannot ping web1, try pinging another host on that network. If you cannot ping any host on the remote network, packets are not being routed correctly, and the best tool for testing routing problems is traceroute. Given a destination host, traceroute tests each hop between you and that host. For example, a successful trace between dev1 and web1 looks like this:

$ traceroute 10.1.2.5
traceroute to 10.1.2.5 (10.1.2.5), 30 hops max, 40 byte packets
 1  10.1.1.1 (10.1.1.1)  5.432 ms  5.206 ms  5.472 ms
 2  web1 (10.1.2.5)  8.039 ms  8.348 ms  8.643 ms

Here the packet goes from dev1 to its gateway (10.1.1.1) and then, on the next hop, reaches web1, which suggests both hosts are likely behind the same gateway. In environments with more routing hops, the output will be longer. If you cannot ping web1, the output looks more like this:

$ traceroute 10.1.2.5
traceroute to 10.1.2.5 (10.1.2.5), 30 hops max, 40 byte packets
 1  10.1.1.1 (10.1.1.1)  5.432 ms  5.206 ms  5.472 ms
 2  *
 3  *

Once you see asterisks in the output, the problem lies at the gateway: you need to look at that router to see why it cannot route packets between the two networks. Alternatively, your trace might look like this instead:

$ traceroute 10.1.2.5
traceroute to 10.1.2.5 (10.1.2.5), 30 hops max, 40 byte packets
 1  10.1.1.1 (10.1.1.1)  3006.477 ms !H  3006.779 ms !H  3007.072 ms !H

In this case, the trace times out at the gateway with host-unreachable (!H) errors, which suggests either that the remote host is down or that it cannot be reached from its own subnet. If you have not already done so, try to reach web1 from another machine on the same subnet and run ping and the other tests from there.

Note: if some annoying network is blocking ICMP, don't worry, you can still trace routes. Just install the tcptraceroute package (sudo apt-get install tcptraceroute) and run the same kind of trace, substituting tcptraceroute for traceroute.
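
For this example, a TCP-based trace toward web1's web port, taking port 80 from the scenario above (tcptraceroute usually needs root and accepts the destination port as its second argument), might look like:

$ sudo tcptraceroute 10.1.2.5 80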

Is the remote port open?

We can now route to the target machine but still cannot reach the web server on port 80, so the next test is to check whether that port is open. There are several ways to do this; one is telnet:

$ telnet 10.1.2.5 80
Trying 10.1.2.5...
telnet: Unable to connect to remote host: Connection refused

If the connection is refused, the port is probably closed (perhaps Apache is not running on the remote host, or is not listening on that port), or a firewall is blocking access. If telnet can connect, congratulations, the network itself is not the problem; if the web service still does not behave as expected, you need to look at the Apache configuration on web1 (web server troubleshooting is covered elsewhere).

Personally, I prefer nmap to telnet for port testing, because it can often detect when a firewall is involved. If nmap is not already installed, install the nmap package from your package manager. To test web1, enter:

$ nmap -p 80 10.1.2.5

Starting Nmap 4.62 ( http://nmap.org ) at 2009-02-05 18:49 PST
Interesting ports on web1 (10.1.2.5):
PORT   STATE    SERVICE
80/tcp filtered http

This is where nmap shines: it can often tell the difference between a port that is actually closed and one that is blocked by a firewall. It normally reports truly closed ports as "closed" and firewalled ports as "filtered". Here it reports port 80 as filtered, meaning a firewall is dropping our packets, so you need to check the firewall rules on the gateway (10.1.1.1) and on web1 itself to see whether port 80 is being blocked.
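
If nmap is not available, a fallback worth knowing is netcat, which simply reports whether a TCP connection to the port succeeds (it cannot distinguish closed from filtered the way nmap can):

$ nc -zv 10.1.2.5 80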

Test the remote host locally

At this point we have either narrowed the problem down to the network or come to suspect the remote host itself. If you suspect the host itself, there are a few things you can do on web1 to check whether port 80 is actually available.

Listening port test

The first thing to do on web1 is test whether anything is listening on port 80. The netstat -lnp command lists all ports that are open and listening. We could run it and scan the full output by eye, but it is quicker to use grep to show only port 80:

$ sudo netstat -lnp | grep :80
tcp  0  0  0.0.0.0:80  0.0.0.0:*  LISTEN  919/apache

The first column shows the transport protocol the port uses. The second and third columns are the receive and send queues (both 0 here). The column to pay attention to is the fourth one, which lists the local address the host is listening on. Here, 0.0.0.0:80 tells us the host is listening on all of its IP addresses for port 80 traffic. If Apache were listening only on web1's Ethernet address, this column would show 10.1.2.5:80.

The last column shows which process has the port open. Here we can see that Apache is running and listening. If you do not see this in your netstat output, you need to start the Apache server.
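
On systems where netstat has been replaced by ss (part of the iproute2 package), a roughly equivalent check is:

$ sudo ss -lnpt | grep :80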

Firewall rules

If the process is running and listening on port 80, then web1's own firewall may be the problem. Use the iptables command to list all current firewall rules. If the firewall is disabled, the output looks like this:

$ sudo /sbin/iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Notice that the default policy is ACCEPT. It is also possible for the firewall to have no rules listed yet still drop every packet, because the default policy itself is DROP. In that case the output looks like this:

$ sudo /sbin/iptables -L
Chain INPUT (policy DROP)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination

Chain OUTPUT (policy DROP)
target     prot opt source               destination

On the other hand, if a firewall rule blocks port 80, the output should look like this:

$ sudo /sbin/iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
REJECT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

In the latter case, the fix is to change the firewall rules so the server accepts traffic to port 80.
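
Exactly how to allow that traffic depends on how the firewall is managed; as a minimal sketch, inserting an ACCEPT rule ahead of the REJECT rule with plain iptables would look like this (it will not survive a reboot unless your distribution's firewall tooling saves it):

$ sudo /sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT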

Troubleshooting a slow network

In a way, a network that is completely down is the easier problem: when a host is unreachable, you can work through the steps above until it comes back. A network that is merely slow is often much trickier to pin down. This section covers some techniques for tracking down the causes of a slow network.

DNS problem

Although DNS is often blamed unfairly when something is wrong with the network, it deserves early attention when the problem is poor network performance. For example, if you have two DNS servers configured for a domain and the first one goes down, every DNS request waits 30 seconds before failing over to the second server. This is obvious when you use a tool like dig or nslookup, but in everyday use DNS failures slow things down in unexpected ways, because so many services need DNS to resolve host names to IP addresses. These problems can even affect your network troubleshooting tools.

Many network troubleshooting tools, including ping, traceroute, route, netstat, and even iptables, can be slowed down by DNS problems. By default, all of these tools try to resolve IP addresses to host names, and when the DNS server is broken, each command stalls while its lookups time out. With ping or traceroute, each response seems to take a long time to arrive even though the reported round-trip time is short. With netstat and iptables, the output can take ages to appear on the screen because the system is waiting for DNS requests to time out.

In all of these cases, you can easily bypass DNS so that it does not skew your troubleshooting results. Each of the commands listed above accepts a -n option that stops it from resolving IP addresses to host names. I have simply gotten into the habit of adding -n to these commands, as mentioned earlier, unless I specifically want the addresses resolved.
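
For reference, the -n variants of two commands used earlier in this example (addresses taken from the example) look like this:

$ ping -n -c 5 10.1.1.1
$ traceroute -n 10.1.2.5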

Note: DNS can also hurt web server performance in other unexpected ways. Some web servers are configured to do a reverse lookup on every IP address that connects and log the resulting host name. That makes the logs more readable, but it can bring the web server to a crawl when something goes wrong, for example under a flood of visitors: the server ends up busy resolving IP addresses instead of serving traffic.

Using traceroute to troubleshoot a slow network

When connections between servers on different networks slow down, the culprit can be hard to track, especially when the slowdown shows up as latency (how long a response takes) rather than reduced overall bandwidth; in that situation traceroute is the tool that really turns the tide. As mentioned earlier, traceroute tests the overall connectivity between a client and a server on a remote network, but because it reports the time taken at each hop between you and the destination, it is also effective at diagnosing where slowness comes from, whether an overloaded gateway or simply the geographic distance involved. For example, here is a traceroute from a server in the United States to a Yahoo server in China:

$ traceroute yahoo.cn
traceroute to yahoo.cn (202.165.102.205), 30 hops max, 60 byte packets
 1  64-142-56-169.static.sonic.net (64.142.56.169)  1.666 ms  2.351 ms  3.038 ms
 2  2.ge-1-1-0.gw.sr.sonic.net (209.204.191.36)  1.241 ms  1.243 ms  1.229 ms
 3  265.ge-7-1-0.gw.pao1.sonic.net (64.142.0.198)  3.388 ms  3.612 ms  3.592 ms
 4  xe-1-0-6.ar1.pao1.us.nlayer.net (69.22.130.85)  6.464 ms  6.607 ms  6.642 ms
 5  ae0-80g.cr1.pao1.us.nlayer.net (69.22.153.18)  3.320 ms  3.404 ms  3.496 ms
 6  ae1-50g.cr1.sjc1.us.nlayer.net (69.22.143.165)  4.335 ms  3.955 ms  3.957 ms
 7  ae1-40g.ar2.sjc1.us.nlayer.net (69.22.143.118)  8.748 ms  5.500 ms  7.657 ms
 8  as4837.xe-4-0-2.ar2.sjc1.us.nlayer.net (69.22.153.146)  3.864 ms  3.863 ms  3.865 ms
 9  219.158.30.177 (219.158.30.177)  275.648 ms  275.702 ms  275.687 ms
10  219.158.97.117 (219.158.97.117)  284.506 ms  284.552 ms  262.416 ms
11  219.158.97.93 (219.158.97.93)  263.538 ms  270.178 ms  270.121 ms
12  219.158.4.65 (219.158.4.65)  303.441 ms  *  303.465 ms
13  202.96.12.190 (202.96.12.190)  306.968 ms  306.971 ms  307.052 ms
14  61.148.143.10 (61.148.143.10)  295.916 ms  295.780 ms  295.860 ms
...

Even without knowing anything else about this network, the round-trip times alone tell the story. At hop 9 the address changes to 219.158.30.177, meaning the packets have left the United States and arrived in China, and the round-trip time jumps from about 3 milliseconds to around 275 milliseconds.

Using iftop to find out who is hogging the bandwidth

Sometimes the network is slow not because of remote servers or routers, but simply because something on the system is consuming too much of the available bandwidth. It is hard to tell at a glance which processes those are, but there are tools that can help you find the troublemakers.

top is such a useful troubleshooting tool that it has inspired a family of similar tools, such as iotop, which shows which processes are consuming most of the disk I/O. iftop does the same kind of thing for network connections. Unlike top, iftop does not track processes; instead, it lists the connections between your server and remote IPs that are consuming the most bandwidth. For example, you can quickly spot your backup server's IP address in the iftop output and see whether backups are eating a large share of your bandwidth.

(Figure: iftop output)

Both Red Hat and Debian based distributions ship a package named iftop, although on Red Hat based distributions you may have to get it from a third-party repository. Once it is installed, run the iftop command as root to start it. As with top, press q to quit.

At the top of the iftop screen is a bar showing overall traffic. Below it are two columns of addresses, the source IP and the destination IP, with arrows between them so you can see whether bandwidth is being used to send data from your host or to receive data from a remote host. After those come three more columns showing the transfer rate between each pair of hosts averaged over the last 2, 10, and 40 seconds. Much like load averages, this lets you see whether bandwidth usage is spiking right now or has been high for a while. At the bottom of the screen are totals for transmitted (TX) and received (RX) data. Like top, the display refreshes periodically.

The iftop command with no extra arguments usually covers most troubleshooting needs, but occasionally its options come in handy. By default, iftop shows statistics for the first network interface it finds, but a server may have several; to make iftop watch the second Ethernet interface (eth2 in this example), run iftop -i eth2.

By default, iftop tries to resolve every IP address to a host name. One drawback is that if the remote DNS server is slow, the display updates painfully slowly; another is that all of those DNS lookups generate extra network traffic, which then shows up in iftop's own report. To disable DNS resolution, run iftop with the -n option.

Normally iftop shows the total bandwidth used between hosts, but to narrow things down you may want to see which ports each host is using to communicate. After all, once you know which port the biggest bandwidth consumer is connected to, say the FTP port, you know where to focus the rest of your troubleshooting. After starting iftop, press p to toggle the display of ports on and off. You may notice, though, that showing every port can push the host you care about off the screen; if that happens, press S or D to show only source ports or only destination ports. Showing only one side is useful because the remote host often talks from many ephemeral ports that do not correspond to any recognizable service, and bandwidth peaks may bounce among them. The port on your own server, on the other hand, usually does correspond to a service running there, and you can use the netstat -lnp technique described earlier to find out which service is listening on it.
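
Combining the options mentioned above into one invocation for this example, a session limited to eth2 with DNS resolution off and ports shown from the start might look like this (iftop's -P flag turns on port display):

$ sudo iftop -i eth2 -n -P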

Like most Linux commands, iftop has many more options. What we have covered here is enough for most troubleshooting needs, but if you want to dig further into iftop's features, here is a little tip: just type man iftop to read the manual that ships with the package.

That concludes "How to troubleshoot a Linux server". Thank you for reading. If you would like to learn more about the industry, follow the site, and the editor will keep publishing practical, high-quality articles for you!

