1. Problem
Docker container logs filled up the host's disk. Running docker logs -f container_name scrolls endlessly: the log files take up a lot of space, and the logs that are no longer needed can be cleaned up.
2. Solution
2.1 Find the Docker container logs
On Linux, container logs are generally stored under /var/lib/docker/containers/container_id/, and the file ending in -json.log (the container's stdout, i.e. the business log) can get very large. The script docker_log_size.sh below lists the size of each log file:
#!/bin/sh
echo "= docker containers logs file size ="
# quote the pattern so the shell does not expand it before find sees it
logs=$(find /var/lib/docker/containers/ -name "*-json.log")
for log in $logs
do
    ls -lh $log
done
# chmod +x docker_log_size.sh
# ./docker_log_size.sh
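If you only want the total amount of disk space the container logs occupy, a du one-liner also works. This is a minimal sketch that assumes the default Docker data root /var/lib/docker:

# total size of all container json logs (assumes the default data root /var/lib/docker)
du -ch /var/lib/docker/containers/*/*-json.log | tail -n 1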
2.2 Clean up Docker container logs (a stopgap)
If the Docker container is running and you delete its log with rm -rf, df -h will show that the disk space has not been freed. The reason is that on Linux or Unix systems, deleting a file with rm -rf or the file manager only unlinks it from the file system's directory structure. If the file is still open (a process is using it), the process can still read it and the disk space remains occupied. The correct approach is cat /dev/null > *-json.log. Alternatively, you can delete the file with rm -rf and then restart Docker. Below is a log cleanup script, clean_docker_log.sh:
#!/bin/sh
echo "= start clean docker containers logs ="
# quote the pattern so the shell does not expand it before find sees it
logs=$(find /var/lib/docker/containers/ -name "*-json.log")
for log in $logs
do
    echo "clean logs : $log"
    cat /dev/null > $log
done
echo "= end clean docker containers logs ="
# chmod +x clean_docker_log.sh
# ./clean_docker_log.sh
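If you only want to clear the log of a single container, you do not have to dig through /var/lib/docker by hand: docker inspect can print the log path directly. A minimal sketch, where my_container is a placeholder container name and root privileges are assumed:

# look up the json-file log path of one container and truncate it in place
log_path=$(docker inspect --format='{{.LogPath}}' my_container)
cat /dev/null > "$log_path"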
However, after this kind of cleanup the container logs will, over time, grow back like weeds.
2.3 Set the Docker container log size (the root fix)
Set an upper limit on the log size of a single container
With the methods above, the log files will grow back sooner or later. To solve the problem at the root, you need to cap the log size of each container. In docker-compose this is done with the max-size option of the logging section, for example:
nginx:
  image: nginx:1.12.1
  restart: always
  logging:
    driver: "json-file"
    options:
      max-size: "5g"
After the nginx container is recreated with this configuration, its log file size is capped at 5 GB, so you no longer need to worry about it.
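If a container is started with docker run instead of docker-compose, the same limit can be passed on the command line. A sketch assuming the same nginx:1.12.1 image:

# equivalent per-container log limit for a container started with docker run
docker run -d --name nginx \
  --log-driver json-file \
  --log-opt max-size=5g \
  nginx:1.12.1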
Global setting
Create /etc/docker/daemon.json if it does not already exist, and add the log-driver and log-opts parameters, as in the following example:
# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f613ce8f.m.daocloud.io"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m",
    "max-file": "3"
  }
}
max-size=500m means that a single container log file is at most 500 MB.
max-file=3 means that a container keeps at most three rotated log files, i.e. id-json.log, id-json.log.1, and id-json.log.2.
# restart the docker daemon
systemctl daemon-reload
systemctl restart docker
Note: the log size limit only takes effect for newly created containers.
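To confirm that a newly created container actually picked up the global setting, you can inspect its log configuration. A quick check, assuming a container named my_container was created after the daemon restart:

# show the effective log driver and options of a container
docker inspect --format='{{json .HostConfig.LogConfig}}' my_container
# expected output is along the lines of:
# {"Type":"json-file","Config":{"max-file":"3","max-size":"500m"}}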
That is all for this article; I hope it is helpful to you.