2025-01-15 Update. From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 report:
This article looks at the fastest way to delete a million files at a time on Linux. The benchmarks below walk through several common approaches; I hope you read them carefully and take something away.
Environment:
CPU: Intel(R) Core(TM)2 Duo E8400 @ 3.00GHz
MEM: 4 GB
HD: ST3250318AS (250 GB, 7200 RPM)
With --delete and --exclude, you can selectively delete only the files that match given patterns. Another advantage of this method is that it leaves the target directory itself in place, which is exactly what you want when you still need that directory for other uses.
Re-evaluation
A few days ago, Keith Winstein replied to this post on Quora saying that my earlier benchmark could not be reproduced because the operations took far too long. Let me clarify: those earlier numbers were probably inflated, perhaps because that machine had seen years of heavy use and its file system may have accumulated errors, though I can't be sure that's the reason. I have now spent a day re-running the test on a relatively new machine, this time using /usr/bin/time -v, which reports much more detail. Here are the new results.
(Each run deletes 1,000,000 files; every file is 0 bytes.)
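For reference, a test set like this can be generated with a small loop. The paths and the 1,000-file count below are illustrative only; the benchmark itself used 1,000,000 files:

```shell
# Create a directory of empty files for the deletion benchmark.
# 1000 files keeps the demo quick; scale the seq bound up for a real test.
mkdir -p a
for i in $(seq 1 1000); do
  : > "a/file_$i"      # ':' produces no output, so each file is 0 bytes
done
ls a | wc -l    # → 1000
```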
Original output
Method 1
~/test $ /usr/bin/time -v rsync -a --delete empty/ a/
        Command being timed: "rsync -a --delete empty/ a/"
        User time (seconds): 1.31
        System time (seconds): 10.60
        Percent of CPU this job got: 95%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:12.42
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 24378
        Voluntary context switches: 106
        Involuntary context switches: 22
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

Method 2
~/test $ /usr/bin/time -v find b/ -type f -delete
        Command being timed: "find b/ -type f -delete"
        User time (seconds): 0.41
        System time (seconds): 14.46
        Percent of CPU this job got: 52%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:28.51
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 11749
        Voluntary context switches: 14849
        Involuntary context switches: 11
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

Method 3: find c/ -type f | xargs -L 100 rm
~/test $ /usr/bin/time -v ./delete.sh
        Command being timed: "./delete.sh"
        User time (seconds): 2.06
        System time (seconds): 20.60
        Percent of CPU this job got: 54%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:41.69
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 1764225
        Voluntary context switches: 37048
        Involuntary context switches: 15074
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

Method 4: find d/ -type f | xargs -L 100 -P 100 rm
~/test $ /usr/bin/time -v ./delete.sh
        Command being timed: "./delete.sh"
        User time (seconds): 2.86
        System time (seconds): 27.82
        Percent of CPU this job got: 89%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:34.32
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 1764278
        Voluntary context switches: 929897
        Involuntary context switches: 21720
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

Method 5
~/test $ /usr/bin/time -v rm -rf f
        Command being timed: "rm -rf f"
        User time (seconds): 0.20
        System time (seconds): 14.80
        Percent of CPU this job got: 47%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:31.29
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 176
        Voluntary context switches: 15134
        Involuntary context switches: 11
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
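Methods 3 and 4 time a wrapper script, ./delete.sh, whose body the article never lists. Here is a plausible reconstruction, demonstrated on a small tree; the script contents and the 50-file tree named c/ are assumptions, not taken from the original:

```shell
# Build a small test tree named c/ (stand-in for the million-file set).
mkdir -p c
for i in $(seq 1 50); do : > "c/f$i"; done

# Reconstructed delete.sh: -L 100 batches names so each rm invocation
# gets at most 100 arguments; method 4 adds -P 100 to run up to 100
# rm processes in parallel.
cat > delete.sh <<'EOF'
#!/bin/sh
find c/ -type f | xargs -L 100 rm
EOF
chmod +x delete.sh

./delete.sh
find c/ -type f | wc -l    # → 0
```

The parallel variant (method 4) was actually slower in wall-clock terms here than plain find -delete, which suggests that for metadata-heavy work like unlinking, spawning 100 concurrent rm processes mostly adds context-switch overhead.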
I still really wonder why Lee's method is faster than the others, even faster than rm -rf: in these runs the rsync approach finished in 12.42 s, versus 31.29 s for rm -rf.
That is the fastest way to delete a million files at a time on Linux. Thank you for reading!