Today I will talk about how to handle high IO on a Linux server; many people may not know much about this. To help everyone learn more, I have summarized the following, and I hope you gain something from this article.
I. Background
After an online upgrade, I found that the IO wait time on our two Tomcat servers consistently exceeded 100 ms, and even exceeded 300 ms at peak times. Checking the servers showed that CPU load and memory usage were not high, so the problem was most likely in disk reads and writes, and that disk had no IO activity other than writing logs. In the end, the cause turned out to be the application printing too many log messages, which drove the disk IO load too high.
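Before touching any kernel parameters, it is worth confirming this picture with standard tools; a minimal sketch (iostat and mpstat come from the sysstat package, and the log path below is a hypothetical placeholder):
$ iostat -x 1 5     # high await/%util on the log disk with low CPU points at disk IO
$ mpstat 1 5        # the %iowait column shows how much time the CPUs spend waiting on IO
$ ls -lh /path/to/app/logs    # hypothetical path: check how fast the log files are growing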
II. The search for solutions
Some research showed that Linux uses the pdflush process to write data from the page cache to the hard disk, so modifying some of pdflush's parameters should improve the IO load problem.
pdflush behavior is controlled by the parameters under /proc/sys/vm.
pdflush decides when to write data to the hard disk based on two conditions (a quick way to inspect the current values is shown below):
1. Whether data has stayed in the page cache for more than 30 seconds; if so, it is treated as expired dirty data and written to disk;
2. Whether dirty pages have reached 10% of working memory.
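A minimal way to inspect the current values of these knobs (assuming a standard procfs layout; sysctl is available on essentially every distribution):
$ sysctl -a 2>/dev/null | grep '^vm.dirty'
# or read the files directly
$ grep . /proc/sys/vm/dirty_*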
The first condition pdflush checks is controlled by:
/proc/sys/vm/dirty_expire_centisecs (default 3000)
This declares when data in the Linux kernel write buffer is "old" enough that the pdflush process should start writing it to disk. The unit is 1/100th of a second. The default of 3000 means data older than 30 seconds is considered expired and flushed to disk. For particularly write-heavy workloads it is fine to shrink this value somewhat, but not too much, because shrinking it too far will also cause IO to rise too quickly.
Of course, if your system memory is relatively large and the writes are intermittent and not large at a time (say tens of megabytes), then this value can be left larger.
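A quick check of the current expiry, with the centisecond unit converted to seconds (a minimal sketch assuming the stock /proc path):
$ val=$(cat /proc/sys/vm/dirty_expire_centisecs)
$ echo "dirty data expires after $(( val / 100 )) seconds"   # 3000 -> 30 seconds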
The second condition is whether dirty memory has reached the limit at which it must be written to the hard disk, determined by the following parameters:
/proc/sys/vm/dirty_ratio (default 20)
Controls the size of the file system write buffer as a percentage of system memory: when dirty data reaches this share of memory, processes start writing it to disk. Raising it lets the system use more memory for disk write buffering and can greatly improve write performance; however, when you need sustained, continuous writes, you should lower this value.
/proc/sys/vm/dirty_background_ratio (default 10)
Controls when the file system's pdflush process flushes to disk. The unit is a percentage of system memory, representing the maximum amount of stale page cache (dirty pages) to retain, with MemFree + Cached - Mapped as the baseline; once the limit is exceeded, the cached pages are written to disk. pdflush keeps memory and the file system in sync: for example, when a file is modified in memory, pdflush is responsible for writing it back to the hard disk. Whenever more than 10% of memory is dirty, pdflush flushes it to disk. Raising this value lets the system use more memory for disk write buffering and can greatly improve write performance; however, when you need sustained, continuous writes, you should lower it.
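As an illustration of what these two ratios mean in practice, the following computes the approximate thresholds from the current settings (illustrative only; the kernel actually measures against available memory rather than total memory, so treat the numbers as rough):
$ mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
$ bg=$(cat /proc/sys/vm/dirty_background_ratio)
$ fg=$(cat /proc/sys/vm/dirty_ratio)
$ echo "background flushing starts around $(( mem_kb * bg / 100 / 1024 )) MB of dirty pages"
$ echo "writers start blocking around $(( mem_kb * fg / 100 / 1024 )) MB of dirty pages"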
/proc/sys/vm/dirty_writeback_centisecs (default 500)
Controls the interval at which the kernel's dirty-data flush process pdflush runs. The unit is 1/100th of a second; the default value of 500 means 5 seconds. If your system writes continuously, it may actually be better to lower this number so that spikes are smoothed out into more frequent, smaller writes.
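One way to see these settings in action is to watch the kernel's dirty-page counters while the application writes logs (a minimal sketch; watch ships with procps on most distributions):
$ watch -n 1 "grep -E 'Dirty|Writeback' /proc/meminfo"
# Dirty should grow between flushes and drop roughly every dirty_writeback interval,
# or as soon as it crosses the dirty_background_ratio threshold.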
For systems with heavy write activity:
dirty_background_ratio: the primary tuning parameter. Lower this value if you need to flush the write cache continuously rather than in bulk.
dirty_ratio: the secondary tuning parameter.
If there are a lot of write operations, to avoid long I/O wait times, you can set:
$ echo 5 > /proc/sys/vm/dirty_background_ratio
$ echo 10 > /proc/sys/vm/dirty_ratio
File system data buffering also requires frequent memory allocation, and increasing the amount of reserved memory improves system speed and stability. Reserve 64 MB on systems with less than 8 GB of memory and 256 MB on systems with more than 8 GB; min_free_kbytes is expressed in kilobytes, so 64 MB corresponds to 65536.
$ echo 65536 > /proc/sys/vm/min_free_kbytes
III. Final Solution
The parameter I adjusted was /proc/sys/vm/dirty_expire_centisecs (default 3000).
The default is 30 seconds, which is a bit long, so I changed it to 10 seconds.
echo 1000 > /proc/sys/vm/dirty_expire_centisecs
After the modification, IO wait time immediately decreased, averaging 40 ms to 50 ms, about 1/3 to 1/4 of the original.
The long-standing IO problem has been solved!
Here is the complete list of the parameters I modified:
echo 5 > /proc/sys/vm/dirty_ratio
echo 2 > /proc/sys/vm/dirty_background_ratio
echo 100 > /proc/sys/vm/dirty_writeback_centisecs
echo 262144 > /proc/sys/vm/min_free_kbytes
echo 1000 > /proc/sys/vm/dirty_expire_centisecs
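Writes to /proc/sys do not survive a reboot, so to make the change permanent the same values can be set through the corresponding sysctl keys; a minimal sketch, assuming a standard /etc/sysctl.conf setup:
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
vm.dirty_ratio = 5
vm.dirty_background_ratio = 2
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 262144
vm.dirty_expire_centisecs = 1000
Apply the file without rebooting with: sysctl -p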