
Summary of basic knowledge of Raid


This article summarizes the basic knowledge of RAID: what RAID is, the common RAID levels and how to choose between them, and how to create and maintain software RAID under Linux with mdadm. I hope it helps answer your questions about RAID basics.

1. What is RAID?

RAID stands for Redundant Array of Inexpensive Disks. The basic idea of RAID is to combine several small, inexpensive disks into a disk group whose performance matches or exceeds that of a single large, expensive disk.

At present, RAID technology falls roughly into two kinds: hardware-based RAID and software-based RAID. Under Linux, the RAID function can be realized entirely in software, which saves the cost of an expensive hardware RAID controller and accessories while still greatly improving disk I/O performance and reliability. Because the RAID function is implemented in software, it is flexible to configure and convenient to manage. With software RAID we can also merge several physical disks into one larger virtual device, achieving both better performance and data redundancy. Of course, hardware-based RAID solutions are somewhat better than software RAID in performance and serviceability, for example in detecting and repairing multi-bit errors, and in automatically detecting failed disks and rebuilding the array.
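As a quick, hedged sketch (assuming the mdadm package is already installed), you can check whether the kernel md driver is active and whether any software arrays already exist before doing anything else:

root@xiaop-laptop:/# cat /proc/mdstat
root@xiaop-laptop:/# mdadm --version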

2. RAID level introduction

The commonly used RAID levels are RAID 0, RAID 1, RAID 3, RAID 4 and RAID 5, plus the combined level RAID 0+1 (or RAID 10). Let's first compare the advantages and disadvantages of these RAID levels:

Relative advantages and disadvantages of the RAID levels:

RAID 0: fastest access; no fault tolerance.

RAID 1: full fault tolerance; high cost.

RAID 3: best write performance; no multitasking.

RAID 4: multitasking and fault tolerance; the dedicated parity disk becomes a performance bottleneck.

RAID 5: multitasking and fault tolerance; extra overhead when writing.

RAID 0+1/RAID 10: high speed and full fault tolerance; high cost.

2.1 Features and applications of RAID 0

RAID 0 is also known as stripe mode (striped): consecutive data is distributed across multiple disks. When the system issues a data request, it can be serviced by several disks in parallel, each disk handling its own part of the request. This parallel operation makes full use of the bus bandwidth and significantly improves overall disk access performance. Because reads and writes are done in parallel across the devices, read and write performance increase, which is usually the main reason for running RAID 0. However, RAID 0 has no data redundancy: if one drive fails, no data can be recovered.
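As a minimal sketch (the partitions /dev/sdb1 and /dev/sdc1 are placeholders, and this command is not part of the original example), a two-disk RAID 0 array could be created with the mdadm tool described later:

root@xiaop-laptop:/# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1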

2.2 Features and applications of RAID 1

RAID 1, also known as mirroring, is a fully redundant mode. RAID 1 can be used on two or 2×N disks, with zero or more spare disks; every time data is written, it is also written to the mirror disk. This array is highly reliable, but its effective capacity is only half of the total capacity. The member disks should be of equal size, otherwise the usable capacity is determined by the smallest disk.
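Likewise, as a hedged sketch with placeholder device names, a two-disk RAID 1 mirror could be created with:

root@xiaop-laptop:/# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1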

2.3 Features and applications of RAID 3

RAID 3 first performs an XOR operation on the data to generate the parity data, then writes the data and parity data to the member disk drives in parallel access mode, so it inherits the advantages and disadvantages of that mode. Furthermore, with each data transfer RAID 3 updates the entire stripe (that is, the data at the corresponding position on every member disk is updated together), so there is no need to read existing data from some drives, XOR it with the new data, and then write it back (this happens in RAID 4 and RAID 5 and is commonly referred to as the read-modify-write process). Therefore, of all RAID levels, RAID 3 has the best write performance.

The parity data of RAID 3 is usually stored on a dedicated parity disk, but because each write updates the entire stripe, the parity disk of RAID 3 does not become an access bottleneck the way the parity disk of RAID 4 does.

The parallel access mode of RAID 3 requires special RAID controller support to keep the disk drives synchronized, and its write-performance advantage can nowadays largely be obtained through caching, so it is generally believed that RAID 3 will gradually fade out of the market.

Because of its excellent writing performance, RAID 3 is especially suitable for large-scale, continuous file writing applications, such as drawing, image, video editing, multimedia, data warehousing, high-speed data capture and so on.

2.4 Features and applications of RAID 4

It takes three or more disks to create RAID 4, which stores parity information on one drive and writes data to the other disks in RAID 0 fashion. Because one disk is reserved for parity information, the size of the array is (N-1) * S, where S is the size of the smallest drive in the array. As in RAID 1, the disks should be of equal size.

If one drive fails, the parity information can be used to rebuild all the data. If two drives fail, all data is lost. The reason this level is rarely used is that the parity information is stored on a single drive, and it must be updated on every write to any other disk. The parity disk therefore easily becomes a bottleneck when writing large amounts of data, so this RAID level is rarely used nowadays.

RAID 4 adopts independent access mode and uses a single dedicated parity disk to store the parity data. RAID 4 uses long stripes and can perform overlapped I/O, so its read performance is good.

However, because a single dedicated parity disk is used to store the parity data, it becomes a serious bottleneck when writing. Therefore, RAID 4 is not widely used.

2.5 Features and applications of RAID 5

RAID 5 is probably the most useful RAID mode when you want to combine a large number of physical disks while still retaining some redundancy. RAID 5 can be used on three or more disks, with zero or more spare disks. Just like RAID 4, the resulting RAID 5 device size is (N-1) * S.
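As an illustrative calculation (the disk sizes are hypothetical, not from the original text): with N = 4 member disks of S = 100 GB each, the usable RAID 5 capacity is (4-1) * 100 GB = 300 GB; the equivalent of one disk is consumed by parity.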

The biggest difference between RAID 5 and RAID 4 is that the parity information is distributed evenly across all the drives, thus avoiding the bottleneck of RAID 4. If one of the disks fails, all data remains available thanks to the parity information. If a spare disk is available, data reconstruction begins immediately after a device fails. If two disks fail at the same time, however, all data is lost: RAID 5 can withstand one disk failure, but not two or more.

RAID 5 also adopts independent access mode, but its parity data is written in turn to every member disk drive, so it not only has the multitasking performance of overlapped I/O but also avoids the write bottleneck of RAID 4's single dedicated parity disk. However, writes in RAID 5 are still slightly slowed by the read-modify-write process.

Because RAID 5 can perform overlapped, multitasking I/O, the more member disk drives a RAID 5 array has, the higher its performance: a disk drive can only serve one thread at a time, so the more drives there are, the more threads can be overlapped and the higher the performance. On the other hand, the more disk drives there are, the higher the probability that one of them fails, and the lower the reliability, or MTDL (Mean Time to Data Loss), of the whole array.

Because RAID 5 distributes the parity data across the disk drives, it fits the characteristics of XOR well. For example, when several write requests occur at the same time, the data to be written and the corresponding parity data may be spread over different member drives, so the RAID controller can take full advantage of overlapped I/O and let several drives work independently, greatly improving the overall performance of the array.

Basically, in multi-user, multitasking environments, applications with frequent accesses but small amounts of data per access are well suited to RAID 5, such as enterprise file servers, web servers, online transaction systems and e-commerce.

2.6 Features and applications of RAID 0+1 (RAID 10)

RAID 0+1/RAID 10 combines the advantages of RAID 0 and RAID 1 and is suitable for applications that need high speed and full fault tolerance, and of course have the budget. The principles of RAID 0 and RAID 1 are simple, and combining them is also simple, so we will not describe that in detail. Instead, let's discuss whether the combination should be RAID 0 over RAID 1 or RAID 1 over RAID 0; that is, should we make several RAID 1 sets into a RAID 0, or several RAID 0 sets into a RAID 1?

RAID 0 over RAID 1

Suppose we have four disk drives, and every two disk drives are made into RAID 1, and then two RAID 1 are made into RAID 0. This is RAID 0 over RAID 1:

(RAID 1) A = Drive A1 + Drive A2 (Mirrored)

(RAID 1) B = Drive B1 + Drive B2 (Mirrored)

RAID 0 = (RAID 1) A + (RAID 1) B (Striped)

RAID 1 over RAID 0

Suppose we have four disk drives, and every two disk drives are made into RAID 0 first, and then two RAID 0 are made into RAID 1. This is RAID 1 over RAID 0:

(RAID 0) A = Drive A1 + Drive A2 (Striped)

(RAID 0) B = Drive B1 + Drive B2 (Striped)

RAID 1 = (RAID 0) A + (RAID 0) B (Mirrored)

Under this architecture, if (RAID 0) A has a disk drive failure, (RAID 0) A is destroyed, but the RAID 1 can still work properly; if (RAID 0) B then also has a disk drive failure, (RAID 0) B is destroyed too, both halves of the RAID 1 are considered failed, and all the RAID 1 data is lost.

With RAID 0 over RAID 1, by contrast, a second drive failure destroys the array only if it happens to hit the same mirror pair as the first failure; a failure in the other RAID 1 pair can still be tolerated. Therefore RAID 0 over RAID 1 has higher reliability than RAID 1 over RAID 0, and we suggest that when using the RAID 0+1/RAID 10 architecture you first make RAID 1 sets and then combine several RAID 1 sets into a RAID 0.
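As a hedged sketch (partition names are placeholders), Linux mdadm can build exactly this stripe-of-mirrors layout in one step as level 10 from four partitions:

root@xiaop-laptop:/# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sd[b-e]1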

3. How to choose the RAID level

Which of RAID 0, 1, 2, 3, 4 or 5 is right for you is not just a matter of cost: fault tolerance, transfer performance and future scalability must also meet the needs of the application.

RAID is nothing new in the market. Many people have a general understanding of the basic concept of RAID and of the differences between the RAID levels. In practice, however, we find that many users still do not know exactly how to choose a suitable RAID level, especially between RAID 0+1 (10), RAID 3 and RAID 5.

3.1 Access modes of striped RAID

In RAID systems that use data striping, there are two ways to access the member disk drives:

Parallel access (Paralleled Access)

Independent access (Independent Access)

RAID 2 and RAID 3 are in parallel access mode.

RAID 0, RAID 4, RAID 5 and RAID 6 adopt independent access mode.

3.2 Parallel access mode

In parallel access mode, the spindle motors of all the disk drives are precisely controlled so that the position of every disk is synchronized with the others, and then a very short I/O data transfer is made to each disk drive, so that every I/O instruction from the host is spread evenly across all the disk drives.

To achieve parallel access, every disk drive in the RAID must have almost identical specifications: the same rotational speed, the same head seek speed (access time), the same buffer or cache capacity and access speed, the same command-processing speed, and the same I/O channel speed. In short, to use parallel access mode, all member disk drives in the RAID should be of the same brand and model.

3.2.1 Basic working principle of parallel access

Assume the RAID has four disk drives of the same specification, A, B, C and D, and divide the timeline into T0, T1, T2, T3 and T4:

T0: the RAID controller transfers the first chunk of data to the buffer of A; the buffers of drives B, C and D are empty and waiting.

T1: the RAID controller transfers the second chunk to the buffer of B, while A starts writing the data in its buffer to the sectors; the buffers of drives C and D are empty and waiting.

T2: the RAID controller transfers the third chunk to the buffer of C, while B starts writing its buffer to the sectors and A has completed its write; the buffers of drives D and A are empty and waiting.

T3: the RAID controller transfers the fourth chunk to the buffer of D, while C starts writing its buffer to the sectors and B has completed its write; the buffers of drives A and B are empty and waiting.

T4: the RAID controller transfers the fifth chunk to the buffer of A, while D starts writing its buffer to the sectors and C has completed its write; the buffers of drives B and C are empty and waiting.

The cycle then repeats for the next I/O instruction from the host. The key point is that whenever a disk drive is ready to write data to its sectors, the target sector must be passing right under the head; at the same time, the length of each data transfer from the RAID controller to a drive must exactly match the drive's rotational speed. Otherwise a miss occurs and RAID performance drops sharply.

3.2.2 Best applications of parallel access RAID

The parallel access RAID architecture, with its precise motor control and distributed data transfers, maximizes the performance of each disk drive in the array while making full use of the bandwidth of the storage bus, so it is especially suitable for applications that access large, continuous data files, such as:

Image and video file server

Data warehousing system

Multimedia database

Electronic library

Prepress or negative output file server

Other large and continuous file servers

Due to the characteristics of the parallel access architecture, the RAID controller can only handle one I/O request at a time and cannot perform overlapping multitasking, so it is very unsuitable for environments with frequent I/O, random data access and small amounts of data per transfer. Also, because parallel access cannot overlap multiple tasks, there is no way to "hide" the disk drives' seek time, and every data transfer has to wait for the rotational latency of the first drive, which is on average half a revolution; with 10,000 rpm disk drives that is about 3 ms. Mechanical latency is therefore the biggest problem of the parallel access architecture.

3.3 Independent access mode

Compared with the parallel access mode, the independent access mode does not synchronize the rotation of the member disk drives; each drive is accessed independently, with no constraints on ordering or timing, and the amount of data per transfer is relatively large. The independent access mode can therefore use higher-level functions such as overlapping multitasking and Tagged Command Queuing to "hide" the mechanical delays of the disk drives (seek and rotational latency).

Because the independent access mode supports overlapping multitasking and can handle I/O requests from multiple hosts at the same time, it achieves its best performance in multi-host environments such as clustering.

3.3.1 Best applications of independent access RAID

Because the independent access mode can accept multiple I/O requests at the same time, it is especially suitable for systems with frequent data accesses and a small amount of data per transaction, for example:

Online trading system or e-commerce application

Multi-user database

ERP and MRP systems

File server for small files

4. Create and maintain RAID

4.1 mdadm

On a Linux server, software RAID is created and maintained with the mdadm tool, which makes creating and managing software RAID convenient and flexible. The common mdadm parameters are as follows:

--create or -C: create a new software RAID, followed by the name of the RAID device, for example /dev/md0, /dev/md1, and so on.

--assemble or -A: assemble (load) an existing array, followed by the name of the array and its member devices.

--detail or -D: output the details of the specified RAID device.

--stop or -S: stop the specified RAID device.

--level or -l: set the RAID level. For example, "--level=5" means the array is created as RAID 5.

--raid-devices or -n: specify the number of active disks in the array.

--scan or -s: scan the configuration file or the /proc/mdstat file for software RAID configuration information. This parameter cannot be used alone; it is only used in combination with other parameters.

The following example shows how to set up software RAID with mdadm.

4.1.1 Create partitions

[Example 1]

A machine has four free hard drives, /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde, and we will use these four drives to create a RAID 5. The specific steps are as follows:

First use the "fdisk" command to create a partition on each hard disk, as follows:

root@xiaop-laptop:/# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n                 # create a new partition with n
Command action
e   extended
p   primary partition (1-4)             # enter p to create a primary partition
p
Partition number (1-4): 1               # enter 1 to create the first primary partition
First cylinder (1-102, default 1):      # press Enter to accept the default starting cylinder, here 1
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-102, default 102):
Using default value 102
Command (m for help): w                 # enter w to write the partition table to disk
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
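A small hedged aside: the "fdisk -l" listing below shows the partitions with type "fd (Linux raid autodetect)". If your fdisk session creates them with the default type 83 instead, you would normally also change the type inside fdisk before writing, roughly like this:

Command (m for help): t                 # change the partition type
Hex code (type L to list codes): fd     # fd = Linux raid autodetect
Command (m for help): w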

Repeat the same procedure on the other hard drives (/dev/sdc, /dev/sdd and /dev/sde).

When this is done, running "fdisk -l" should show something like the following:

Disk /dev/sdb: 214 MB, 214748160 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot    Start    End    Blocks    Id    System
/dev/sdb1          1    204    208880    fd    Linux raid autodetect

Disk /dev/sdc: 214 MB, 214748160 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot    Start    End    Blocks    Id    System
/dev/sdc1          1    204    208880    fd    Linux raid autodetect

Disk /dev/sdd: 214 MB, 214748160 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot    Start    End    Blocks    Id    System
/dev/sdd1          1    204    208880    fd    Linux raid autodetect

We can see that each of the disks above now has one partition, and the partitions are the same size.

4.1.2 Create RAID 5

After creating the four partitions /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1, you can create the RAID 5, with /dev/sde1 set as a spare device and the rest as active devices. The role of the spare device is that if an active device is damaged, it can immediately be replaced by the spare. The command is as follows:

root@xiaop-laptop:/# mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]1
mdadm: array /dev/md0 started.

Where "--spare-devices=1" indicates that there is only one spare device in the current array, that is, "/ dev/sde1" as a standby device. If there are multiple standby devices, set the value of "--spare-devices" to the corresponding number. After the successful creation of the RAID device, you can view the details of the RAID with the following command:

root@xiaop-laptop:/# mdadm --detail /dev/md0

/dev/md0:
Version: 00.90.01
Creation Time: Mon Jan 22 10:55:49 2007
Raid Level: raid5
Array Size: 208640 (203.75 MiB 213.65 MB)
Device Size: 104320 (101.88 MiB 106.82 MB)
Raid Devices: 3
Total Devices: 4
Preferred Minor: 0
Persistence: Superblock is persistent
Update Time: Mon Jan 22 10:55:52 2007
State: clean
Active Devices: 3
Working Devices: 4
Failed Devices: 0
Spare Devices: 1
Layout: left-symmetric
Chunk Size: 64K
Number   Major   Minor   RaidDevice   State
0        8       17      0            active sync   /dev/sdb1
1        8       33      1            active sync   /dev/sdc1
2        8       49      2            active sync   /dev/sdd1
3        8       65      -1           spare         /dev/sde1
UUID: b372436a:6ba09b3d:2c80612c:efe19d75
Events: 0.6

4.1.3 Create a configuration file for RAID

The RAID configuration file is called "mdadm.conf". It does not exist by default, so it needs to be created manually. The main purpose of this configuration file is to let the system load the software RAID automatically at boot time; it also makes future management easier. The "mdadm.conf" file contains the devices used by the software RAID, specified with the DEVICE option, and the device name, RAID level, number of active devices and device UUID of the array, specified with the ARRAY option. Generate the RAID configuration file as follows:

root@xiaop-laptop:/# mdadm --detail --scan > /etc/mdadm.conf

However, the generated "mdadm.conf" may not yet match the required format and therefore may not take effect, so you need to edit the file manually into the following format:

root@xiaop-laptop:/# vi /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=b372436a:6ba09b3d:2c80612c:efe19d75

If no RAID configuration file is created, the software RAID has to be assembled manually after each system boot before it can be used. The command to assemble it manually is:

root@xiaop-laptop:/# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: /dev/md0 has been started with 3 drives and 1 spare.
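Conversely, as a brief hedged sketch using the --stop option listed earlier, a running array can be deactivated again with:

root@xiaop-laptop:/# mdadm --stop /dev/md0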

4.1.4 Create a file system

Now you just need to create a file system on the RAID device; it is used in exactly the same way as creating a file system on an ordinary partition or disk. The command to create an ext3 file system on the device "/dev/md0" is as follows:

root@xiaop-laptop:/# mkfs.ext3 /dev/md0

After creating the file system, mount the device and you can use it normally. If you want to create other levels of RAID, the steps are basically the same as creating RAID 5, except that when you specify the "--level" value, you need to set it to the appropriate level.
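To finish the RAID 5 example, here is a hedged sketch of mounting the new file system (the mount point /mnt/raid is a placeholder, not from the original text), with an optional /etc/fstab entry so that it is mounted automatically at boot:

root@xiaop-laptop:/# mkdir -p /mnt/raid
root@xiaop-laptop:/# mount /dev/md0 /mnt/raid
root@xiaop-laptop:/# echo "/dev/md0 /mnt/raid ext3 defaults 0 0" >> /etc/fstab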

4.2 Maintenance of software RAID

Although software RAID can ensure data reliability to a large extent, in daily work it may still be necessary to adjust the RAID, and physical damage to the RAID devices cannot be ruled out.

In these situations you can also use the "mdadm" command to handle the problem. The complete process of replacing a failed RAID disk is described below through an example.

4.2.1 Simulate a failed disk

[Example 2]

Building on [Example 1], assume that the "/dev/sdc1" device fails and needs to be replaced with a new disk. The whole process is as follows:

In practice, when software RAID detects that a disk has failed, it automatically marks the disk as failed and stops reading from and writing to it. Here we mark /dev/sdc1 as the failed disk ourselves, as follows:

root@xiaop-laptop:/# mdadm /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0

Because the RAID 5 in [Example 1] has a spare device, as soon as a disk is marked as failed the spare automatically replaces it and the array is rebuilt in a short time. The current state of the array can be seen in the "/proc/mdstat" file, as follows:

root@xiaop-laptop:/# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sde1[3] sdb1[0] sdd1[2] sdc1[4](F)
208640 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
[=====>...............]  recovery = 26.4% (28416/104320) finish=0.0min speed=28416K/sec
unused devices: <none>

The information above indicates that the array is being rebuilt. When a device fails or is marked as failed, "(F)" is appended to that device, for example "sdc1[4](F)". In "[3/2]" the first number is the number of devices the array contains and the second is the number of active devices; because there is currently one failed device, the second number is 2. At this point the array is running in degraded mode: it is still usable, but it has no data redundancy. "[U_U]" indicates that the devices currently working normally are /dev/sdb1 and /dev/sdd1; if it were /dev/sdb1 that had failed, the display would be "[_UU]" instead.
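As a small hedged sketch, you can keep watching the rebuild progress until it finishes, for example with:

root@xiaop-laptop:/# watch -n 1 cat /proc/mdstat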

After the data is rebuilt, when you check the array status again, you will find that the current RAID device is back to normal, as follows:

root@xiaop-laptop:/# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sde1[1] sdb1[0] sdd1[2] sdc1[3](F)
208640 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

4.2.2 Remove the failed disk

Since "/ dev/sdc1" has failed, of course, remove the device. Remove the failed disk as follows:

root@xiaop-laptop:/# mdadm /dev/md0 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1

Where "- remove" means to remove a disk from the specified RAID device, or "- r" can be used instead of this parameter.

4.2.3 Add a new hard drive

Before adding a new hard disk to the array, you first need to create a partition on it. Assuming the new partition is again named "/dev/sdc1", add it as follows:

root@xiaop-laptop:/# mdadm /dev/md0 --add /dev/sdc1
mdadm: hot added /dev/sdc1

The meaning of "--add" is the opposite of the previous "--remove". It is used to add a disk to a specified device, or "- a" can be used instead of this parameter.

Because the RAID 5 in [Example 1] had a spare device, the array kept running normally without any intervention. But if another disk failed, the RAID 5 would be left without data redundancy, which is too risky for a device storing important data. The "/dev/sdc1" just added to the RAID 5 therefore appears in the array as a new spare device, as follows:

root@xiaop-laptop:/# mdadm --detail /dev/md0

/dev/md0:
……
……
Number   Major   Minor   RaidDevice   State
0        8       17      0            active sync   /dev/sdb1
1        8       65      1            active sync   /dev/sde1
2        8       49      2            active sync   /dev/sdd1
3        8       33      -1           spare         /dev/sdc1
UUID: b372436a:6ba09b3d:2c80612c:efe19d75
Events: 0.133
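If, instead of keeping the new disk as a spare, you wanted it to become a fourth active member, mdadm can reshape the array. This is a hedged sketch of an optional step that is not part of the original example and requires a reasonably recent kernel and mdadm:

root@xiaop-laptop:/# mdadm --grow /dev/md0 --raid-devices=4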

This concludes our overview of the basic knowledge of RAID. I hope it has resolved your doubts; combining theory with practice is the best way to learn, so go and try it! If you want to learn more, please keep following the site for more practical articles.
