2025-02-22 Update From: SLTechnology News&Howtos shulou
Shulou(Shulou.com)06/01 Report--
This article walks through the junk files that accumulate on a Linux system and the tools for cleaning them up. It is fairly detailed and should serve as a useful reference; if the topic interests you, read it to the end!
After a Linux system is installed, everyday use (installing and removing software, browsing the web, debugging programs, and so on) leaves all sorts of junk files on the hard disk. As this junk accumulates, it not only eats up precious disk space but also slows the machine down and hurts productivity. This article shows how to slim a Linux system down and which tools to use. The distribution used here is Ubuntu 12.04, and the tools covered are Activity Log Manager, BleachBit, find, fdupes, Geeqie, and GConf Cleaner. All of them are open source, so users of other popular distributions (such as Red Hat or SUSE) can download the source code and build them there.
Which files under Linux are junk files?
Temporary files generated during software installation
Many software packages distributed in .bin format first extract their installation files to a temporary directory (usually /tmp) and then install from there. If the installer is careless or something goes wrong on the system, these files stop being "temporary" once installation finishes and linger on the disk as junk, typically with names matching *.tmp.
Temporary files generated during the operation of the software
As during installation, software usually generates temporary swap files while it runs, and some programs leave hundreds of megabytes of garbage behind: files created during connections between an ssh server and its clients, files produced by running virtual machines, and so on. In addition, deleting a user account leaves useless junk files and directories behind.
Temporary documents generated by surfing the Internet
When we browse the web, the browser downloads page files to the local machine. These cache files not only take up valuable disk space but can also expose our personal information.
Rarely used, low-value files
For example, some applications ship with help files and system manual pages. If you never use them, they are effectively junk: they take up a lot of space and can drag down the running speed of the system and of some graphics software. There are also broken desktop files, including corrupted application menu entries and file associations.
Various cache files
If a Linux user installs graphical image tools such as GIMP or Geeqie, their image preview feature creates a file named Thumbs.db in each folder where pictures are kept, and it grows as image files are added. There is also the .DS_Store file, which stores a folder's display properties, such as icon positions; the only side effect of deleting it is losing that layout information. Finally, there are the cache files left behind when installing packages with apt or yum.
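These stray cache files can be swept up by name with find. Below is a minimal, sandboxed sketch: it builds a scratch directory (the file names are illustrative) and removes the two cache files by name, which is the same pattern you would run against a real picture folder.

```shell
# Create a scratch directory with a couple of stray cache files.
dir=$(mktemp -d)
touch "$dir/Thumbs.db" "$dir/.DS_Store" "$dir/photo.jpg"

# \( -o \) combines the two name tests; -delete removes every match.
find "$dir" \( -name 'Thumbs.db' -o -name '.DS_Store' \) -delete

ls -A "$dir"        # only photo.jpg remains
rm -rf "$dir"
```

Run the same find against your real picture directories once you have confirmed the matches look right (drop -delete first for a dry run).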
Use Linux commands to delete junk files left behind by deleted users
The main command used here is find. For example, after an account is deleted, some useless junk files and directories belonging to it remain. To locate the files that belonged to a given user, run find / -user <username>, which searches from the root directory. To find files whose owner no longer exists and remove them, use:
# find / -nouser | xargs rm -rf
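A sandboxed sketch of the same idea follows. Exercising -nouser would require files owned by a deleted account (root-only to set up), so this demo matches the current user instead; the directory and file names are illustrative.

```shell
# Scratch directory with files owned by the current user.
dir=$(mktemp -d)
touch "$dir/a.tmp" "$dir/b.tmp"

# Dry run first: print the matches before deleting anything.
find "$dir" -user "$(id -un)" -type f -print

# Then delete. -print0 / xargs -0 keeps odd file names safe.
find "$dir" -user "$(id -un)" -type f -print0 | xargs -0 rm -f

find "$dir" -type f | wc -l    # prints 0
rm -rf "$dir"
```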
Core file
When a program crashes, the system may dump the memory it was using into a core file. Over time, more and more of these core files pile up, scattered around the system like dust in every corner. We can clean them up with find and its -exec option:
# find / -name core -print -exec rm -rf {} \;
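A safe way to rehearse that command is against a scratch tree, as in this sketch (the project directory names are made up): the sweep prints each match before deleting it, and a second find confirms nothing named core is left.

```shell
# Fake tree with two stale core dumps and one real source file.
dir=$(mktemp -d)
mkdir -p "$dir/proj1" "$dir/proj2"
touch "$dir/proj1/core" "$dir/proj2/core" "$dir/proj1/main.c"

# -print shows what is deleted; -type f avoids directories named core.
find "$dir" -name core -type f -print -exec rm -f {} \;

find "$dir" -name core | wc -l   # prints 0
rm -rf "$dir"
```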
Extra man pages
In addition, Linux ships man pages in many languages; on Ubuntu they live under /usr/share/man. The following commands delete the man pages of every language other than English and Chinese:
# cd /usr/share/man
# find ./ -maxdepth 1 -type d | tail -n +2 | grep -E -v '(en|zh|man).*' | while read d; do rm -rf "$d"; done
Note: the keyword pattern here is (en|zh|man); adjust it to match the languages you want to keep.
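Because this pipeline ends in rm -rf, it is worth a dry run first. The sketch below builds a fake man tree (directory names are illustrative) and echoes what the loop would remove instead of removing it; en, zh_CN, and man1 survive the grep filter, so only the other language directories are reported.

```shell
# Fake man tree: two keepers (en, zh_CN), two prune targets (de, fr),
# plus a section directory (man1) that must also be kept.
mandir=$(mktemp -d)
mkdir -p "$mandir/en" "$mandir/zh_CN" "$mandir/de" "$mandir/fr" "$mandir/man1"
cd "$mandir"

# Same pipeline as above, but echo instead of rm -rf.
find ./ -maxdepth 1 -type d | tail -n +2 | grep -E -v '(en|zh|man).*' \
    | while read d; do echo "would remove $d"; done
```

Once the output lists only directories you truly want gone, swap the echo back for rm -rf.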
Use the fdupes tool to delete duplicate files in the specified directory
fdupes is a command-line tool that finds (and can delete) duplicate files in a given directory. It compares file sizes and MD5 hashes first, then compares the remaining candidates byte by byte. Install the tool first:
# apt-get install fdupes
Look for duplicate files in the / etc directory, using the following command:
# fdupes /etc
You can use it in combination with the Linux command to delete files:
# fdupes -r -f . | grep -v '^$' | tee duplicate.txt
# cat duplicate.txt | while read file; do rm -v "$file"; done
You can also combine it with sed to generate a deletion script:
# fdupes -r -n -S /tmp | sed -r 's/^/rm "/' | sed -r 's/$/"/' > duplicate-files.sh
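If fdupes is not installed, the size-then-hash idea behind it can be sketched with coreutils alone. The toy version below (file names and contents are made up) hashes every file and uses GNU uniq to report names whose MD5 appears more than once; fdupes is more careful, since it follows the hash match with a byte-by-byte comparison.

```shell
# Two identical files and one unique one.
dir=$(mktemp -d)
echo "same content" > "$dir/a.txt"
echo "same content" > "$dir/b.txt"
echo "different"    > "$dir/c.txt"

# md5sum prints "<32-char hash>  <path>"; sort groups equal hashes;
# uniq -w32 compares only the hash and prints each duplicate group.
find "$dir" -type f -exec md5sum {} + | sort \
    | uniq -w32 --all-repeated=separate

rm -rf "$dir"
```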
Delete cache file
Clean only the cached packages of outdated versions:
# apt-get autoclean
Clean up all software caches:
# apt-get clean
Use the Geeqie tool to find similar image files
Hard disks keep getting bigger, and for backup purposes many people save pictures on the principle of "better too many than too few", which leads to duplicate files and makes searching inefficient. Finding duplicates by hand in a large store is no easy task. With the help of the well-known image browser Geeqie, however, we can easily find duplicate and similar image files on the system. Image files are much larger than text files, so over time they consume a lot of disk space. The fdupes tool described above will not do here, because it can only delete files that are exactly identical (same md5sum); to remove merely "similar" images, use Geeqie. Install the tool first:
# apt-get install gqview
Then run the tool, right-click the directory you want to search, and choose "Find duplicates recursive..." (see figure 1).
In the "Compare by:" drop-down menu in the lower left corner, choose the comparison method Similarity (custom) to find pictures that are more than 99% similar; check "Thumbnails" to display thumbnails, as shown in figure 2.
Then right-click the selected items and choose "Delete" to remove all the selected pictures. A confirmation dialog appears before deletion to guard against mistakes. Note: 99% is the default similarity threshold; users can change it under Edit → Preferences → Preferences... → Behavior → Miscellaneous: Custom similarity threshold (see figure 3).
Cleaning up files with BleachBit
BleachBit is a free, open source system-cleaning tool similar in function to CCleaner on the Windows platform. It deletes hidden junk files and offers simple privacy protection: it can erase caches, delete cookies, clear browsing history, remove unused localization files and logs, and delete temporary files, making it a very practical cross-platform cleaner. BleachBit provides rpm and deb binary packages for distributions such as Fedora/CentOS/RHEL and Debian/Ubuntu; users of other distributions can build it from source (download address: http://bleachbit.sourceforge.net/download.PHP). With BleachBit you can clean the system's caches, history, temporary files, cookies, and other clutter, freeing up disk space. At present, BleachBit can clean junk left by more than 70 programs, including Beagle, Firefox, Epiphany, Flash, OpenOffice.org, KDE, GIMP, Java development tools, and the vim and Gedit editors, as well as the Thumbs.db files generated at runtime, the cache files left by apt or yum packages, and clipboard history.
First install the software:
# apt-get install bleachbit
After installation, two new entries appear in the system tools menu: bleachbit and bleachbit as root (use the second when running as root). The first time the software runs, the Preferences window pops up, as shown in figure 4.
Briefly, the settings cover custom files and folders, the drive list, the language, a whitelist of items exempt from cleaning, and whether to start BleachBit at boot, among other options.
Now look at the main working interface, shown in figure 5.
BleachBit does one thing, so it is easy to use, as figure 5 shows. The left pane lists the kinds of garbage that can be cleaned. Click the "Preview" button to analyze the details and size of the junk files, then check the items you want and press the "Clean" button.
An example using the Chrome browser
The main items BleachBit can clean for the Chrome browser include:
- Cache: delete page cache files (these files shorten the next visit to the same page)
- Cookies: delete cookie files, which store site preferences, authentication, and identity information
- Current session: delete the current browsing session
- DOM (document object model) storage: delete HTML5 cookies
- Form history: the history of input typed into site forms
- History: delete the history of visited sites, downloads, and thumbnails
- Search engines: reset search-engine usage history and delete non-built-in search engines (some of these are added automatically)
- Defragment databases: clean up database fragmentation to reclaim space and improve speed (no data is deleted)
First look at the cleanable Chrome browser items, shown in figure 6.
After selecting the items to clean, click the "Preview" button to scan for the junk files in those categories. The scan is fast, and when it finishes the user sees the list of detected junk files along with their statistics. Then just click the "Clean" button to remove them.
The example above scans and cleans only the Chrome browser, but you can of course select every item in the garbage list; just check each scan item you want active.
That is the full content of "Example Analysis of Junk Files under Linux". Thank you for reading, and I hope you found it helpful!