2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report--
In this issue, the editor looks at what to do when disk I/O load is too high on a Linux system. The article is rich in content and approaches the topic from a professional point of view; I hope you get something out of it.
The methods below are guided operations for dealing with high disk I/O load on a system.
Main command: echo deadline > /sys/block/sda/queue/scheduler
Note: the following is for reference only. If disk I/O really is heavy and the workload is a database, read/write splitting or sharding can reduce disk pressure; for file workloads, RAID can spread the load.
1) I/O scheduler overview:
1) When a data block is written to or read from a device, the request is placed in a queue to await completion.
2) Each block device has its own queue.
3) The I/O scheduler maintains the order of these queues to use the medium more efficiently, turning disordered I/O operations into ordered ones.
4) Before dispatching, the kernel must first determine how many requests are in the queue.
2) The four I/O scheduling algorithms
1) CFQ (Completely Fair Queuing I/O scheduler)
Features:
In recent kernel versions and distributions, CFQ is the default I/O scheduler, and it is the best choice for general-purpose servers.
CFQ tries to distribute I/O bandwidth evenly among processes to avoid starvation and achieve low latency; it is a compromise between the deadline and as schedulers.
CFQ is the best choice for multimedia applications (video, audio) and desktop systems.
CFQ assigns each I/O request a priority that is independent of process priority: reads and writes from a high-priority process do not automatically inherit a high I/O priority.
How it works:
CFQ creates a separate queue for each process/thread to manage the requests that process generates, so every process has its own queue, and scheduling among the queues uses time slices.
Within each time slice the I/O scheduler dispatches up to four requests from one process, which ensures every process gets a fair share of I/O bandwidth.
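The per-process queues and four-requests-per-turn dispatch described above can be sketched as a toy awk simulation (a simplified model, not the kernel implementation; the process names and request IDs are invented):

```shell
# Toy CFQ-style dispatch: stdin lines are "process request-id"; each turn
# dispatches up to 4 queued requests per process, round-robin.
cfq_dispatch() {
  awk '
    {
      if (!($1 in seen)) { seen[$1] = 1; procs[++np] = $1 }
      q[$1, ++cnt[$1]] = $2                  # enqueue per-process
    }
    END {
      done = 0
      while (done < NR)
        for (p = 1; p <= np; p++) {
          name = procs[p]
          # up to 4 requests from this process, then move to the next
          for (i = 0; i < 4 && head[name] < cnt[name]; i++) {
            print name, q[name, ++head[name]]; done++
          }
        }
    }
  '
}
```

For example, `printf 'A 1\nA 2\nA 3\nA 4\nA 5\nB 1\n' | cfq_dispatch` emits A's first four requests, then B's one request, then A's fifth.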
2) NOOP (elevator scheduler): suitable for SSDs.
On SSDs and other flash devices such as Fusion-io cards, the simple NOOP may be the best algorithm, because the other three algorithms optimize by shortening seek time, while SSDs have no seek time and very short I/O response times.
Features:
In Linux 2.4 and earlier kernels this was the only I/O scheduling algorithm.
NOOP implements a simple FIFO queue and organizes I/O requests like an elevator: when a new request arrives, it is merged with the most recently queued request if the two are contiguous on the medium; otherwise it is appended to the queue.
NOOP tends to starve reads and favors writes.
NOOP is the best choice for flash devices, RAM disks, and embedded systems.
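The back-merging behavior described above can be sketched in a few lines of awk (a simplified model, assuming each request is a "start-sector length" pair and only merges with the request queued immediately before it):

```shell
# Toy NOOP queue: stdin lines are "sector nblocks". A request contiguous
# with the previously queued one is folded into it; otherwise it is
# appended FIFO. Prints the resulting queue.
noop_queue() {
  awk '
    {
      if (n > 0 && $1 == sec[n] + len[n])
        len[n] += $2                          # contiguous: merge into last
      else { n++; sec[n] = $1; len[n] = $2 }  # otherwise append
    }
    END { for (i = 1; i <= n; i++) print sec[i], len[i] }
  '
}
```

For example, `printf '0 8\n8 8\n100 4\n' | noop_queue` merges the first two requests into one `0 16` request and leaves `100 4` queued behind it.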
Why the elevator algorithm starves read requests:
Write requests are easier to issue than reads.
Writes go through the file system cache and do not have to wait for one write to complete before the next begins, so write requests are merged and pile up in the I/O queue.
A read, by contrast, must wait until all preceding reads complete before the next can proceed. With milliseconds between read operations, write requests slip in between and starve the remaining reads.
3) Deadline (deadline scheduler)
Features:
Requests are sorted both by expiration time and by disk sector; sorting and merging otherwise work much as in noop.
Deadline guarantees that a request is served within a deadline. The deadline is adjustable, and the default read deadline is shorter than the write deadline, which prevents reads from being starved by writes.
Deadline is the best choice for database environments (Oracle RAC, MySQL, etc.).
4) AS (anticipatory I/O scheduler)
Features:
Essentially the same as Deadline, except that after completing a read it waits up to 6 ms before scheduling other I/O requests.
Within that window a new read request from the application can be serviced immediately, improving read performance at the expense of some writes.
It inserts new I/O operations within each 6 ms window and merges small write streams into large ones, trading write latency for maximum write throughput.
AS suits write-heavy environments such as file servers.
AS performs poorly in database environments.
3) Viewing and setting the I/O scheduling method
1) Check the current system's I/O scheduling method:
[root@test1 tmp]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
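Since the active scheduler is the bracketed entry, it can be extracted with sed; a small helper (a sketch that reads the scheduler line on stdin so it works against any device's sysfs file):

```shell
# active_sched: print the scheduler shown in [brackets] in a line such as
# "noop anticipatory deadline [cfq]", read from stdin.
active_sched() { sed 's/.*\[\([a-z_-]*\)\].*/\1/'; }

# e.g. active_sched < /sys/block/sda/queue/scheduler
```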
2) Change the I/O scheduling method on the fly:
For example, to switch to the noop elevator scheduling algorithm:
echo noop > /sys/block/sda/queue/scheduler
3) To change the I/O scheduling method permanently:
Modify the kernel boot parameters and add elevator=<scheduler name>.
[root@test1 tmp]# vi /boot/grub/menu.lst
Change the kernel line to the following:
kernel /boot/vmlinuz-2.6.18-8.el5 ro root=LABEL=/ elevator=deadline rhgb quiet
After rebooting, check the scheduling method:
[root@test1 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq
It is now deadline.
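Before switching, it is worth checking that the requested scheduler is actually offered by the kernel for that device; a hedged sketch (the helper names are invented; sched_available reads the scheduler line on stdin, and set_sched needs root plus the sysfs layout shown above):

```shell
# sched_available NAME: succeed if NAME appears in a scheduler line such
# as "noop anticipatory deadline [cfq]" read from stdin (brackets stripped).
sched_available() {
  avail=$(tr -d '[]')
  case " $avail " in
    *" $1 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

# set_sched DEVICE NAME: switch only when NAME is offered for DEVICE.
set_sched() {
  if sched_available "$2" < "/sys/block/$1/queue/scheduler"; then
    echo "$2" > "/sys/block/$1/queue/scheduler"
  else
    echo "scheduler $2 not available on $1" >&2
    return 1
  fi
}
```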
4) Testing the I/O schedulers
The test covers read-only, write-only, and simultaneous read/write.
A single 600 MB file is used, reading or writing 2 MB at a time, 300 times in total.
1) Test disk read:
[root@test1 tmp]# echo deadline > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/sda1 of=/dev/null bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 6.81189 seconds, 92.4 MB/s
real 0m6.833s
user 0m0.001s
sys 0m4.556s
[root@test1 tmp]# echo noop > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/sda1 of=/dev/null bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 6.61902 seconds, 95.1 MB/s
real 0m6.645s
user 0m0.002s
sys 0m4.540s
[root@test1 tmp]# echo anticipatory > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/sda1 of=/dev/null bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 8.00389 seconds, 78.6 MB/s
real 0m8.021s
user 0m0.002s
sys 0m4.586s
[root@test1 tmp]# echo cfq > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/sda1 of=/dev/null bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 29.8 seconds, 21.1 MB/s
real 0m29.826s
user 0m0.002s
sys 0m28.606s
Results:
First, noop: 6.61902 seconds, 95.1 MB/s
Second, deadline: 6.81189 seconds, 92.4 MB/s
Third, anticipatory: 8.00389 seconds, 78.6 MB/s
Fourth, cfq: 29.8 seconds, 21.1 MB/s
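dd reports decimal megabytes, so each figure above is simply bytes / seconds / 1,000,000; the deadline run, for instance, can be rechecked with a one-liner:

```shell
# Recompute dd's reported rate for the deadline read test:
# 629145600 bytes copied in 6.81189 s, expressed in decimal MB/s.
awk 'BEGIN { printf "%.1f MB/s\n", 629145600 / 6.81189 / 1000000 }'
```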
2) Test disk write:
[root@test1 tmp]# echo cfq > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/zero of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 6.93058 seconds, 90.8 MB/s
real 0m7.002s
user 0m0.001s
sys 0m3.525s
[root@test1 tmp]# echo anticipatory > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/zero of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 6.79441 seconds, 92.6 MB/s
real 0m6.964s
user 0m0.003s
sys 0m3.489s
[root@test1 tmp]# echo noop > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/zero of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 9.49418 seconds, 66.3 MB/s
real 0m9.855s
user 0m0.002s
sys 0m4.075s
[root@test1 tmp]# echo deadline > /sys/block/sda/queue/scheduler
[root@test1 tmp]# time dd if=/dev/zero of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 6.84128 seconds, 92.0 MB/s
real 0m6.937s
user 0m0.002s
sys 0m3.447s
Test results:
First, anticipatory: 6.79441 seconds, 92.6 MB/s
Second, deadline: 6.84128 seconds, 92.0 MB/s
Third, cfq: 6.93058 seconds, 90.8 MB/s
Fourth, noop: 9.49418 seconds, 66.3 MB/s
3) Test simultaneous read / write
[root@test1 tmp]# echo deadline > /sys/block/sda/queue/scheduler
[root@test1 tmp]# dd if=/dev/sda1 of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 15.1331 seconds, 41.6 MB/s
[root@test1 tmp]# echo cfq > /sys/block/sda/queue/scheduler
[root@test1 tmp]# dd if=/dev/sda1 of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 36.9544 seconds, 17.0 MB/s
[root@test1 tmp]# echo anticipatory > /sys/block/sda/queue/scheduler
[root@test1 tmp]# dd if=/dev/sda1 of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 23.3617 seconds, 26.9 MB/s
[root@test1 tmp]# echo noop > /sys/block/sda/queue/scheduler
[root@test1 tmp]# dd if=/dev/sda1 of=/tmp/test bs=2M count=300
300+0 records in
300+0 records out
629145600 bytes (629 MB) copied, 17.508 seconds, 35.9 MB/s
Test results:
First, deadline: 15.1331 seconds, 41.6 MB/s
Second, noop: 17.508 seconds, 35.9 MB/s
Third, anticipatory: 23.3617 seconds, 26.9 MB/s
Fourth, cfq: 36.9544 seconds, 17.0 MB/s
5) ionice
ionice can change a task's I/O scheduling class and priority, but only the cfq scheduler honors ionice settings.
Three examples of ionice usage:
Real-time scheduling class, priority 7:
ionice -c1 -n7 time dd if=/dev/sda1 of=/tmp/test bs=2M count=300 &
Default (best-effort) disk I/O scheduling class, priority 3:
ionice -c2 -n3 time dd if=/dev/sda1 of=/tmp/test bs=2M count=300 &
Idle disk scheduling class, priority 0:
ionice -c3 -n0 time dd if=/dev/sda1 of=/tmp/test bs=2M count=300 &
Of the three ionice scheduling classes, real-time is the highest, then the default (best-effort) class, and finally the idle class.
ionice has 8 disk scheduling priorities; 0 is the highest and 7 the lowest.
Note that disk scheduling priority is unrelated to the process nice priority: one governs the process's I/O, the other the process's CPU time.
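As a precaution, a wrapper can refuse out-of-range values before handing them to ionice (a sketch; the wrapper name run_ionice is invented, and remember ionice only takes effect under cfq):

```shell
# run_ionice CLASS PRIO CMD...: validate the class (1=realtime,
# 2=best-effort, 3=idle) and priority (0 highest .. 7 lowest) before
# invoking ionice on the command.
run_ionice() {
  class=$1; prio=$2; shift 2
  case $class in 1|2|3) ;; *) echo "bad class: $class" >&2; return 1 ;; esac
  case $prio  in [0-7]) ;; *) echo "bad priority: $prio" >&2; return 1 ;; esac
  ionice -c "$class" -n "$prio" "$@"
}
```

For example, `run_ionice 3 0 dd if=/dev/sda1 of=/tmp/test bs=2M count=300 &` runs the dd in the idle class.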
Summary:
1. CFQ and DEADLINE focus on satisfying scattered (random) I/O requests and do not optimize sequential I/O such as sequential reads. To cover mixed random and sequential workloads, Linux also supports the ANTICIPATORY scheduling algorithm, which builds on DEADLINE by giving each read I/O a 6 ms wait window: if the OS receives a read request for an adjacent location within that window, it is satisfied immediately.
The choice of I/O scheduler depends not only on hardware characteristics but also on the application scenario.
On traditional SAS disks, CFQ, DEADLINE, and ANTICIPATORY are all good choices; for a dedicated database server, DEADLINE gives good throughput and response time.
On SSDs and other flash devices such as Fusion-io cards, however, the simple NOOP may be the best algorithm, since the other three optimize by shortening seek time, which SSDs do not have, and SSD I/O response times are very short.
2. For database applications the Anticipatory scheduler performs worst. Deadline performs slightly better than cfq in a DSS environment, while cfq performs better overall, which is why RHEL sets cfq as its default I/O scheduler.
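The summary's advice can be condensed into a small helper (the function name and its two-argument interface are invented for illustration):

```shell
# pick_sched TYPE WORKLOAD: map the summary's recommendations to a
# scheduler name. TYPE is ssd or sas; WORKLOAD is db, file, or desktop.
pick_sched() {
  case $1 in
    ssd) echo noop ;;                # no seek time, nothing to optimize
    sas)
      case $2 in
        db) echo deadline ;;         # good throughput and response time
        *)  echo cfq ;;              # solid general-purpose default
      esac ;;
  esac
}
```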
This is the editor's take on what to do when disk I/O is too high on a Linux system. If you have similar doubts, refer to the analysis above; to learn more, you are welcome to follow the industry information channel.