I. Symptoms of Docker startup abnormalities
1. The container status is repeatedly Restarting; check it with the following command:
$ docker ps -a
CONTAINER ID   IMAGE                                       COMMAND                  CREATED      STATUS                                  PORTS   NAMES
21c09be88c11   docker.xxxx.cn:5000/xxx-tes/xxx_tes:1.0.6   "/usr/local/tomcat..."   9 days ago   Restarting (1) Less than a second ago           xxx10
2. The Docker logs contain obvious errors:
$ docker logs [container name/container ID]
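Beyond docker ps, docker inspect can confirm whether the container is stuck in a restart loop and whether the kernel OOM-killed it. A minimal sketch; the container name is a placeholder:
$ docker inspect -f 'restarts={{.RestartCount}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' [container name/container ID]
If oom=true is printed, the memory analysis in the next section is almost certainly the right track.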
II. Possible causes of Docker startup abnormalities
2.1 Insufficient memory
Docker requires at least 2 GB of memory to start. First, execute the free -mh command to see whether enough memory remains; pay attention to the available column, which includes reclaimable cache, rather than the free column alone.
Check memory directly:
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:            15G         14G        627M        195M        636M        726M
Swap:            0B          0B          0B
Analyze the logs
Sometimes a momentary memory spike causes processes to be killed: memory looks sufficient at the moment you check, yet the Docker container keeps restarting. In that case, dig further into the Docker logs and the system logs:
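To catch such transient spikes, it can help to sample memory usage over time instead of checking it once. A minimal sketch; the 5-second interval and log path are arbitrary choices:
$ vmstat 5                                                        # print memory/swap statistics every 5 seconds
$ while true; do date; free -mh; sleep 5; done >> /tmp/mem.log    # or record snapshots for later review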
Analyze the Docker logs
Check the Docker log for memory-overflow messages. You have to read carefully to find them; they are not necessarily at the bottom of the log:
$ docker logs [container name/container ID] | less
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000769990000, 1449590784, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid1.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000769990000, 1449590784, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid1.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000769990000, 1449590784, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# Can not save log file, dump to screen..
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2756), pid=1, tid=140325689620224
#
# JRE version:  (7.0_79-b15) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: //core or core.1
#
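The log's own suggestions point at shrinking the JVM heap or giving the container more headroom. A minimal sketch, assuming the Tomcat image honors a JAVA_OPTS environment variable (verify this for your image; the memory limit and heap sizes are illustrative, not tuned values):
$ docker run -d -m 2g -e JAVA_OPTS="-Xms512m -Xmx1g" docker.xxxx.cn:5000/xxx-tes/xxx_tes:1.0.6
With -m set, a leaking JVM gets killed inside its own container instead of triggering the host-wide OOM killer.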
Analyze the system logs
Checking the system log reveals a large number of records of processes being killed because of memory overflow:
$ grep -i 'Out of Memory' /var/log/messages
Apr  7 10:04:02 centos106 kernel: Out of memory: Kill process 1192 (java) score 54 or sacrifice child
Apr  7 10:08:00 centos106 kernel: Out of memory: Kill process 2301 (java) score 54 or sacrifice child
Apr  7 10:09:59 centos106 kernel: Out of memory: Kill process 28145 (java) score 52 or sacrifice child
Apr  7 10:20:40 centos106 kernel: Out of memory: Kill process 2976 (java) score 54 or sacrifice child
Apr  7 10:21:08 centos106 kernel: Out of memory: Kill process 3577 (java) score 47 or sacrifice child
Apr  7 10:21:08 centos106 kernel: Out of memory: Kill process 3631 (java) score 47 or sacrifice child
Apr  7 10:21:08 centos106 kernel: Out of memory: Kill process 3634 (java) score 47 or sacrifice child
Apr  7 10:21:08 centos106 kernel: Out of memory: Kill process 3640 (java) score 47 or sacrifice child
Apr  7 10:21:08 centos106 kernel: Out of memory: Kill process 3654 (java) score 47 or sacrifice child
Apr  7 10:27:27 centos106 kernel: Out of memory: Kill process 6998 (java) score 51 or sacrifice child
Apr  7 10:27:28 centos106 kernel: Out of memory: Kill process 7027 (java) score 52 or sacrifice child
Apr  7 10:28:10 centos106 kernel: Out of memory: Kill process 7571 (java) score 42 or sacrifice child
Apr  7 10:28:10 centos106 kernel: Out of memory: Kill process 7586 (java) score 42 or sacrifice child
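When several services share the host, a quick tally of which process names the OOM killer is hitting can point at the main consumer. A minimal sketch, assuming the /var/log/messages format shown above (the field position may differ on other distributions):
$ grep -i 'Out of memory' /var/log/messages | awk '{print $12}' | sort | uniq -c | sort -rn
#  13 (java)    <- hypothetical output: victim name and how often it was killed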
2.2 Port conflict
The port Docker listens on is already occupied by another process. This problem typically occurs with newly deployed services, or with new background services deployed on a machine that already hosts others. Before deployment, check whether the planned port is in use; if a conflict is discovered after going live, switch the service to an available port and restart it.
Check command: $ netstat -nltp | grep [planned port number]
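On newer systems where netstat is absent, ss or lsof answers the same question. A minimal sketch; port 8080 is a hypothetical example:
$ ss -nltp | grep ':8080'     # listening TCP sockets plus the owning process
$ lsof -i :8080               # which process currently holds the port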
III. Countermeasures
3.1 Insufficient memory
Countermeasure 1:
The SaltStack minion can consume a lot of memory after running for a long time and needs to be restarted. The restart command does not always take effect, so check the running state afterwards; if the minion did not actually stop, stop it and start it again.
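A minimal sketch of that restart-and-verify loop, assuming a systemd host and the standard salt-minion service name:
$ systemctl restart salt-minion
$ systemctl status salt-minion            # confirm the unit is active again
$ ps -ef | grep '[s]alt-minion'           # verify the old process is gone and a fresh one is running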
Countermeasure 2:
An ELK log-collection agent or other Java processes may be taking up too much memory. Use top and ps to investigate, carefully determine what each process does, and stop the offending processes only if doing so does not affect the business.
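A minimal sketch of that investigation, assuming a procps-ng top (older top builds may lack the -o option):
$ top -o %MEM                             # interactive view sorted by memory usage
$ ps aux --sort=-%mem | head -n 10        # top 10 memory consumers with full command lines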
Countermeasure 3:
Free memory occupied by buff/cache:
$ sync                                    # flush dirty pages to disk first
$ echo 3 > /proc/sys/vm/drop_caches       # drop page cache, dentries and inodes (must run as root)
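Note that the echo form fails under plain sudo, because the redirection is performed by the unprivileged shell. Two equivalent root-safe variants (a sketch; unnecessary if you are already root):
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$ sudo sysctl -w vm.drop_caches=3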
Countermeasure 4:
Sometimes the cause is not an oversized buff/cache; the memory really is consumed by necessary processes. That has to be addressed at the level of machine resource allocation, for example by adding physical memory or swap, or by moving services to another machine.
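Two directions that fit this situation, sketched under assumptions: cap the container so one leaking service cannot starve the host (the 2g limit and image tag are illustrative), and add a swap file, since the free output above shows Swap: 0B (the 4 GB size is arbitrary):
$ docker run -d -m 2g --memory-swap 2g docker.xxxx.cn:5000/xxx-tes/xxx_tes:1.0.6   # hard memory cap for the container
$ dd if=/dev/zero of=/swapfile bs=1M count=4096   # create a 4 GB swap file
$ chmod 600 /swapfile
$ mkswap /swapfile
$ swapon /swapfile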
3.2 Port conflict
Countermeasure 1:
As noted above, this problem typically occurs with newly deployed services or with new background services added to an existing machine. Check the planned port before deployment; if a conflict is found after going live, move the service to an available port and restart it.
Check command: $ netstat -nltp | grep [planned port number]
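Before moving your own service, it is worth identifying who owns the port; sometimes it is a stale instance of the same application that can simply be stopped. A minimal sketch (port 8080 and PID 12345 are hypothetical):
$ netstat -nltp | grep ':8080'            # the last column shows PID/program, e.g. 12345/java
$ ps -fp 12345                            # inspect the owning process before stopping or relocating it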
That is all the content of this article. I hope it helps with your study, and thank you for your support.