How to use shell scripts to quickly locate logs
Today I will talk to you about how to use shell scripts to quickly locate logs, a topic that many people may not understand well. To help you understand it better, I have summarized the following content. I hope you can get something out of this article.
When we check logs in a test environment, there are only a few log files: we can pick the files whose timestamps are close to the problem and then locate the error by keyword, or at worst read through them all, which is tolerable. In a real production environment, however, with servers deployed as a cluster, the daily logs are numerous: each machine may have dozens or even hundreds of log files. When a problem comes up and you need to query the logs, you will find that searching them one by one drives you to despair, because it is time-consuming, repetitive work, and you still may not find what you need.
To solve this problem, most people write a shell script to filter the log files, which screens out a large number of irrelevant files and reduces the workload of checking logs. The same approach works for a server cluster deployment: we can distribute the script to the same directory on each machine, and then use Xshell to send the command to all remote sessions at once, so that every connected machine runs the script.
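For instance, the script can be pushed out to the machines with a small loop. This is only a sketch, assuming passwordless SSH is configured; the host names and target directory are hypothetical:

#!/bin/bash
# Distribute find.sh to the same directory on every machine.
# The host names and the target directory below are placeholders.
hosts="app01 app02 app03"
for h in $hosts
do
    scp find.sh "$h:/home/logs/"
done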
The prerequisite for quickly locating a problem is to first find which files contain the user's operation records, and then look for the error's stack information in those files and analyze it to find the cause of the error.
Narrowing down the range of files that may contain the error is the prerequisite for checking logs efficiently.
The following code is a simple shell script for filtering log files. If you know shell programming, you can modify it and add the features you need.
#!/bin/bash
# Usage: sh find.sh <date> <keyword>
date=$1
key_word=$2

# Base directory of the log files
base_path=/home/logs/application/
# Base directory + the specified day's directory
files_path="$base_path$date/"

# Count the regular files in the specified directory
f_count=$(ls -l "$files_path" | grep "^-" | wc -l)
# If the number of files is 0, exit the script
if [ "$f_count" -eq 0 ]
then
    echo "there are no files in the directory"
    exit
fi

# Otherwise continue with all files in the directory
files=$(ls "$files_path")
# Flag bit: tmp is cleared only once, right before the first match is copied
flag=0
mkdir -p tmp
echo "start"
for file in $files
do
    echo "find in $file"
    # Count how many lines in the file contain the keyword
    count=$(grep -c "$key_word" "$files_path$file")
    # Print the lines that contain the keyword
    grep "$key_word" "$files_path$file"
    if [ "$count" -gt 0 ]
    then
        # A match was found; clear out tmp, but only the first time
        if [ "$flag" -eq 0 ]
        then
            rm -f tmp/*
        fi
        flag=1
        # Copy the file that contains the keyword to tmp
        cp "$files_path$file" "tmp/$file"
    fi
done
echo "end"
Here base_path is the root directory of the log files. We assume that our logs all live under /home/logs/application/, that a folder named after the date, such as 2019-04-18, is generated for each day's logs, and that the rolled log files share a fixed prefix followed by an index.
For example, our log files are named like this:
/home/logs/application/2019-04-18/application_20190418_0.log
/home/logs/application/2019-04-18/application_20190418_1.log
/home/logs/application/2019-04-18/application_20190418_2.log
...
/home/logs/application/2019-04-18/application_20190418_80.log
Suppose the server has already generated 80 files by the time we query the logs today, and we want to investigate a work-order complaint submitted by a user. Since the user's mobile phone number is printed in our logs, we can check it like this:
sh find.sh 2019-04-18 18300000000
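The script echoes its progress and prints the matching lines as it goes, so a run looks roughly like this:

start
find in application_20190418_0.log
find in application_20190418_1.log
... (lines containing 18300000000 are printed as they are found) ...
find in application_20190418_80.log
end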
In this way, the script copies every log file that records the user 18300000000 into the tmp directory (the same directory as the shell script), and then we can operate on the logs under tmp.
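From there, a second pass over tmp can pull out the stack information for analysis. A minimal sketch, assuming the error keyword "Exception" (a hypothetical example) appears in the stack traces:

# Show each match with 20 trailing lines of context, usually enough for a stack trace
grep -n -A 20 "Exception" tmp/*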
When there are too many logs, older log files may be packed and compressed. We can change the script to add one more step: decompress the archives first, and then run the search.
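For example, assuming the archives are gzip files (the .gz suffix here is an assumption), the extra step could look like this; alternatively, zgrep can search .gz files in place without unpacking them:

# Unpack any gzip archives in the day's directory before the search loop runs
for zfile in "$files_path"*.gz
do
    [ -e "$zfile" ] && gunzip "$zfile"
done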
Some companies have more advanced practices, such as an ELK log analysis platform. Checking logs on ELK is more convenient: it offers a visual interface, more filtering options, multiple query conditions, and so on.
After reading the above, do you have a better understanding of how to use shell scripts to quickly locate logs? Thank you for your support.