Nginx log and performance troubleshooting: example analysis
This article walks through analyzing Nginx logs to troubleshoot performance problems, with worked examples. The commands involved are simple and easy to pick up.
Introduction
Recently I have been doing performance troubleshooting. The idea is to analyze the nginx logs to get each request's URL and response time, then count the requests in the same time window, to determine whether the slowness is caused by concurrency or by the application itself being slow. If it is the application itself, just find the corresponding code and optimize it.
I found several causes. Most came down to the backend running too many SQL queries: a single visit looked fine, but things got slower as more people used the system. With few users, responses took 20-200 milliseconds; with many, 200-6000 milliseconds. After optimization they stayed in the tens of milliseconds. The optimization strategy was to cut unnecessary SQL and add caching, which basically solved the stalls. Along the way I recorded the series of commands I used, as a summary.
If you need the request processing time, add $request_time to the nginx log format. Here is my log_format.
nginx.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent $request_body "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" "$request_time"';
Restart nginx after the change. The log will now record how long nginx spent processing each request. This time is essentially the backend time, so you can use this field to find slow requests.
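With this format, a log line looks roughly like the following (the values here are made up purely for illustration); the final quoted field is $request_time, in seconds:
1.2.3.4 - - [31/May/2017:13:28:55 +0800] "GET /api/list HTTP/1.1" 200 1024 - "-" "Mozilla/5.0" "-" "0.153"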
Here are some of the commands I used.
Get the PV (page view) count
$ cat /usr/local/nginx/logs/access.log | wc -l
Get the unique IP count
$ cat /usr/local/nginx/logs/access.log | awk '{print $1}' | sort -k1 -r | uniq | wc -l
Get the most time-consuming requests (timestamp, URL, duration), top 10. Change the number after head to get more, or drop head entirely to get everything:
$ cat /usr/local/class/logs/access.log | awk '{print $4,$7,$NF}' | awk -F '"' '{print $1,$2,$3}' | sort -k3 -rn | head -10
Get the number of requests in a given second. Shorten the pattern to the minute to get per-minute counts, to the hour for per-hour counts, and so on:
$ cat /usr/local/class/logs/access.log | grep 2017:13:28:55 | wc -l
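For example, following the same pattern, dropping the seconds counts the whole minute, and dropping the minutes counts the whole hour:
$ cat /usr/local/class/logs/access.log | grep '2017:13:28:' | wc -l
$ cat /usr/local/class/logs/access.log | grep '2017:13:' | wc -l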
Get the number of requests per minute, output to a CSV file, and then open it with Excel to generate a histogram:
$ cat /usr/local/class/logs/access.log | awk '{print substr($4,14,5)}' | uniq -c | awk '{print $2","$1}' > access.csv
The resulting chart was generated with Excel. You can also use the command-line tool gnuplot to generate a PNG; I tried it and it works fine, and it produces the report entirely programmatically, removing the manual steps, which is very convenient. The one drawback is that when the x-axis has many data points, gnuplot cannot automatically thin them out the way Excel does, so I still prefer Excel.
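For reference, a minimal gnuplot sketch of that approach, assuming the access.csv produced above (rows of HH:MM,count); the image size and file names are arbitrary choices of mine:
$ gnuplot <<'EOF'
# read comma-separated data and render a PNG bar chart
set datafile separator ","
set terminal png size 1200,400
set output "access.png"
# treat column 1 as HH:MM times on the x-axis
set xdata time
set timefmt "%H:%M"
set format x "%H:%M"
plot "access.csv" using 1:2 with boxes title "requests per minute"
EOF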
In fact, there are only a few commands to use:
cat: output file contents
grep: filter lines by pattern
sort: sort lines
uniq: remove duplicate adjacent lines (hence the sort first)
awk: field-oriented text processing
These commands are combined into pipelines, and a single command can appear more than once to apply several rounds of filtering: the output of each command is the input of the next, processed as a stream. Once you learn this pattern, many seemingly complex tasks become remarkably simple.
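As one illustration of this chaining (my own example, not one from the list above), rank the top 10 client IPs by request count. awk extracts the IP, sort plus uniq -c counts occurrences, and a reverse numeric sort ranks them:
$ cat /usr/local/nginx/logs/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -10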
Everything above was done with commands. Next, a tool that outputs an HTML report directly.
Use GoAccess to analyze nginx logs
cat /usr/local/nginx/logs/access.log | docker run --rm -i diyan/goaccess --time-format='%H:%M:%S' --date-format='%d/%b/%Y' --log-format='%h %^[%d:%t %^] "%r" %s %b "%R" "%u"' > index.html
GoAccess runs here as a Docker container: as long as Docker is installed, you can run it directly with no separate installation, which is very convenient.
Combine the script above with daily log rotation and a crontab entry to run it automatically, and you get a daily nginx report that makes the state of the site clear at a glance. The drawback, of course, is that it is not real-time.
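A sketch of what that crontab entry might look like; the rotated-log name (access.log.1) and the output path are assumptions, so adjust them to your own rotation scheme:
# hypothetical crontab entry: at 00:05 each day, feed yesterday's rotated log to GoAccess
5 0 * * * cat /usr/local/nginx/logs/access.log.1 | docker run --rm -i diyan/goaccess --time-format='%H:%M:%S' --date-format='%d/%b/%Y' --log-format='%h %^[%d:%t %^] "%r" %s %b "%R" "%u"' > /var/www/html/nginx-report.html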
If you want real-time statistics, you can use ngxtop, which is easy to install:
$ pip install ngxtop
To run it, first change into the nginx directory. -c specifies the configuration file and -t sets the refresh interval in seconds:
$ cd /usr/local/nginx
$ ngxtop -c conf/nginx.conf -t 1
But this real-time approach still requires logging in over SSH, which is not very convenient. Alternatively, you can collect statistics in real time with Lua and expose them through an HTTP endpoint. Via lua-nginx-module this works with nginx or Tengine, and if you install OpenResty directly it is even easier: Lua support is embedded, so there is no need to recompile nginx.
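A minimal sketch of that Lua approach, assuming OpenResty or nginx built with lua-nginx-module; the shared-dict name, port, and /stats path are my own illustrative choices:
http {
    # shared memory zone for counters, visible to all worker processes
    lua_shared_dict stats 10m;

    server {
        listen 8080;

        location / {
            # count each request against its URI after it is served
            log_by_lua_block {
                ngx.shared.stats:incr(ngx.var.uri, 1, 0)
            }
            # ... your normal proxy_pass / static file config here ...
        }

        # plain-text endpoint exposing the counters
        location /stats {
            content_by_lua_block {
                local dict = ngx.shared.stats
                for _, key in ipairs(dict:get_keys(0)) do
                    ngx.say(key, " ", dict:get(key))
                end
            }
        }
    }
}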
Thank you for reading. That covers Nginx log analysis and performance troubleshooting by example; how well these commands fit your own setup is best verified in practice.