
What is the analysis script for Nginx HTTP codes?


Today I would like to talk about what an Nginx HTTP code analysis script looks like. Many people may not know much about this, so to help you understand it better I have summarized the following content. I hope you can get something out of this article.

In the past, when doing CDN operations and maintenance, the nature of the business (spanning server rooms, ISPs, and regions) made it unrealistic to ship logs to a central point for QoS analysis. The approach adopted instead was to cut the Nginx logs every 5 minutes, compute the distribution of HTTP codes with a Python program, and monitor each machine's Nginx QoS through Zabbix. By aggregating the Lastvalue records in the Zabbix database, you can then monitor the traffic, QoS data, and so on of the entire CDN, with a detection delay of about 5 minutes (CDN is not very sensitive to QoS delays of that size). Processing the same Nginx logs with rsync + Hadoop + Hive additionally gives more detailed offline analysis across various dimensions. The analysis scripts used on the Nginx logs are posted below.
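Before the full scripts, the core idea fits in a few lines: take one 5-minute log slice and count the status-code field. The sketch below is illustrative only, not taken from the article; it assumes the status code is the ninth whitespace-separated field (index 8), which matches the indices the analysis script uses later, and the file path is made up.

# Minimal sketch (not the production script): count HTTP status codes
# in one 5-minute Nginx access-log slice.
from collections import Counter

def httpcode_distribution(logfile):
    codes = Counter()
    with open(logfile) as f:
        for line in f:
            fields = line.split()
            try:
                # Assumption: $status is the 9th whitespace-separated field.
                codes[int(fields[8])] += 1
            except (IndexError, ValueError):
                codes['unparsed'] += 1
    return codes

if __name__ == '__main__':
    # Hypothetical slice name, following the naming scheme used below.
    dist = httpcode_distribution('/log/nginx/old/2014-01-01-12-05.log')
    total = sum(dist.values())
    for code, n in sorted(dist.items(), key=lambda kv: -kv[1]):
        print code, n, round(100.0 * n / total, 2)

Feeding each per-code count into a Zabbix item then lets the server-side aggregation query below sum them across the whole fleet.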

First, the Zabbix aggregation script:

#!/usr/bin/python
# -*- coding: utf8 -*-
# Get webcdn total statistics by summing Zabbix lastvalue data
# across all CDN cache nodes.
import MySQLdb
import sys


def get_total_value(sql):
    # Run an aggregate query against the Zabbix database and return a
    # single scalar, or 0 if the query returns nothing.
    db = MySQLdb.connect(host='xxxx', user='xxxx', passwd='xxxx', db='xxxx')
    cursor = db.cursor()
    cursor.execute(sql)
    try:
        result = cursor.fetchone()[0]
    except:
        result = 0
    cursor.close()
    db.close()
    return result


if __name__ == '__main__':
    # NOTE: the item keys and one aggregate column were garbled in the
    # source; this reconstruction assumes sums over lastvalue and the
    # [200]/[300]/[four]/[five] key names used by the log script below.
    sql = ''
    if sys.argv[1] == "network_traffic":
        # Total outbound NIC traffic of all cache nodes, in MB.
        sql = ("select round(sum(lastvalue)/(1024*1024), 4) "
               "from hosts a, items b "
               "where key_ in ('net.if.out[eth2,bytes]', 'net.if.out[eth0,bytes]') "
               "and lower(host) like '%-cdn-cache%' and a.hostid = b.hostid")
    elif sys.argv[1] == "nginx_traffic":
        sql = ("select sum(lastvalue) from hosts a, items b "
               "where key_ = 'log_webcdn_getvalue[traffic]' "
               "and lower(host) like '%cdn-cache%' and a.hostid = b.hostid")
    elif sys.argv[1] == "2xxand3xx":
        sql = ("select sum(lastvalue) from hosts a, items b "
               "where key_ in ('log_webcdn_getvalue[200]', 'log_webcdn_getvalue[300]') "
               "and lower(host) like '%-cdn-cache%' and a.hostid = b.hostid")
    elif sys.argv[1] == "4xxand5xx":
        sql = ("select sum(lastvalue) from hosts a, items b "
               "where key_ in ('log_webcdn_getvalue[four]', 'log_webcdn_getvalue[five]') "
               "and lower(host) like '%-cdn-cache%' and a.hostid = b.hostid")
    elif sys.argv[1] == "network_ss":
        sql = ("select sum(lastvalue) from hosts a, items b "
               "where key_ = 'network_conn' "
               "and lower(host) like '%-cdn-cache%' and a.hostid = b.hostid")
    else:
        sys.exit(0)
    # print sql
    value = get_total_value(sql)
    print value
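Each command-line argument selects one fleet-wide aggregate, which Zabbix (or a cron job feeding a Zabbix item) can graph as a single value. The script name in the invocations below is made up, purely to show how the branches map to metrics:

python webcdn_total.py network_traffic   # summed NIC egress of all cache nodes (MB)
python webcdn_total.py 2xxand3xx         # total 2xx/3xx responses across the CDN
python webcdn_total.py 4xxand5xx         # total 4xx/5xx responses across the CDN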

Then there is the single-machine analysis script. It pulls fields out of each access-log line by whitespace position, so the assumed field layout matters.
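The article never shows the log_format the script expects; the layout below is inferred from the split() indices the script uses, so it is an assumption, not something the article confirms:

split()[0]  - client IP
split()[1]  - domain name ($host), e.g. asimgs.pplive.cn
split()[3]  - request timestamp, e.g. [01/Jan/2014:12:00:00
split()[8]  - HTTP status code ($status)
split()[9]  - body bytes sent ($body_bytes_sent)
split()[10] - request URL
split()[-4] - back-to-source (upstream fetch) time
split()[-3] - upstream address, e.g. "127.0.0.1:8080" or "-"
split()[-1] - total request time ($request_time)

With that layout in mind, here is the script: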

#!/usr/bin/python
# coding=utf-8
# Single-machine Nginx log analysis: parse the latest 5-minute log slice
# and compute HTTP code, traffic and response-time statistics.
from __future__ import division
import subprocess, signal, string
import codecs
import re
import os
import time, datetime
import sys


def show_usage():
    print """
python nginx_log_wedcdn.py result_key
result_key could be:
average_bodysize, response_time, sum_count, count_success, four, 403, 404, 499,
five, 500, 502, 503, 200, 300, requests_second, response_time_source,
percentage_time_1, percentage_time_3, all
"""


def runCmd(command, timeout=10):
    # Run a shell command, killing it if it runs longer than `timeout` seconds.
    start = datetime.datetime.now()
    process = subprocess.Popen(command, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, shell=True)
    while process.poll() is None:
        time.sleep(0.2)
        now = datetime.datetime.now()
        if (now - start).seconds > timeout:
            os.kill(process.pid, signal.SIGKILL)
            os.waitpid(-1, os.WNOHANG)
            return None
    return process.stdout.readlines()


def get_old_filename():
    # Gzipped log slice for the previous 5-minute window,
    # e.g. /log/nginx/old/2014-01-01-12-05.log.gz
    t = datetime.datetime.now() + datetime.timedelta(minutes=-5)
    a = t.strftime('%Y-%m-%d-%H')
    b = int(t.strftime('%M')) // 5 * 5
    if b < 10:
        c = "0" + str(b)
    else:
        c = str(b)
    return "/log/nginx/old/" + a + "-%s.log.gz" % c


def get_new_filename():
    # Same slice, rotated but not yet gzipped.
    t = datetime.datetime.now() + datetime.timedelta(minutes=-5)
    a = t.strftime('%Y-%m-%d-%H')
    b = int(t.strftime('%M')) // 5 * 5
    if b < 10:
        c = "0" + str(b)
    else:
        c = str(b)
    return "/log/nginx/old/" + a + "-%s.log" % c


def get_new2_filename():
    # Same slice in the directory for logs still being written.
    t = datetime.datetime.now() + datetime.timedelta(minutes=-5)
    a = t.strftime('%Y-%m-%d-%H')
    b = int(t.strftime('%M')) // 5 * 5
    if b < 10:
        c = "0" + str(b)
    else:
        c = str(b)
    return "/log/nginx/new/" + a + "-%s.log" % c


def average_flow():
    flow = 0
    flow1 = 0
    flow_ppsucai = 0
    flow_asimgs = 0
    flow_static9 = 0
    traffic = 0.0
    traffic1 = 0.0
    count = 0
    count_sucai = 0
    count_sucai_100 = 0
    count_sucai_30_100 = 0
    count_sucai_30 = 0
    count_asimgs = 0
    count_asimgs_100 = 0
    count_asimgs_30_100 = 0
    count_asimgs_30 = 0
    count_static9 = 0
    count_static9_100 = 0
    count_static9_30_100 = 0
    count_static9_30 = 0
    sum_time = 0.0
    sum_ppsucai_time = 0.0
    sum_asimgs_time = 0.0
    sum_static9_time = 0.0
    sum_time_source = 0.0
    count_200 = 0
    count_300 = 0
    count_success = 0
    count_200_backup = 0
    count_not_200_backup = 0
    id_list_200 = [200, 206]
    id_list_300 = [300, 301, 302, 303, 304, 305, 306, 307]
    id_list_success = [200, 206, 300, 301, 302, 303, 304, 305, 306, 307]
    data_byte = 0
    elapsed = 0.0
    response_time = 0.0
    response_time_source = 0.0
    requests_second = 0.0
    requests_second_sucai = 0.0
    requests_second_asimgs = 0.0
    list_time_1 = []
    list_time_3 = []
    list_ip_403 = []
    list_ip_404 = []
    list_ip_415 = []
    list_ip_499 = []
    list_ip_500 = []
    list_ip_502 = []
    list_ip_503 = []
    server_list = ['"127.0.0.1:8080"', '"127.0.0.1:8081"', '"-"']

    # Prefer the gzipped 5-minute slice, then the rotated plain log,
    # then the log still being written; bail out if none exists.
    file_name = get_old_filename()
    if os.path.isfile(file_name):
        Writelog(file_name)
        i = os.popen("/bin/zcat %s" % file_name).readlines()
        # i = gzip.GzipFile("%s" % file_name).readlines()
    else:
        file_name = get_new_filename()
        if os.path.isfile(file_name):
            Writelog(file_name)
            i = os.popen("/bin/cat %s" % file_name).readlines()
        else:
            file_name = get_new2_filename()
            if os.path.isfile(file_name):
                Writelog(file_name)
                i = os.popen("/bin/cat %s" % file_name).readlines()
            else:
                os.popen("rm -f /tmp/exist.txt")
                sys.exit(1)

    for line in i:
        count += 1
        # Pull fields out of the log line by whitespace position.
        try:
            domain_name = line.split()[1]
        except:
            pass
        try:
            web_code = int(line.split()[8])
        except:
            web_code = 888
        try:
            IP = str(line.split()[0])
        except:
            pass
        try:
            data_byte = int(line.split()[9])
        except:
            data_byte = 0.0001
        try:
            elapsed = float(line.split()[-1].strip('"'))
            if elapsed == 0.000:
                elapsed = 0.0001
        except:
            elapsed = 0.0001
        try:
            time_source = float(line.split()[-4].strip('"'))
        except:
            time_source = 0.0
        try:
            backup_server = str(line.split()[-3])
        except:
            pass

        flow1 += data_byte
        if web_code in id_list_success:
            flow += data_byte
            sum_time_source += time_source
            if domain_name != "ppsucai.pptv.com":
                sum_time += elapsed
            else:
                sum_time += 0.000

        if web_code in id_list_200:
            count_200 += 1
            if backup_server not in server_list:
                count_200_backup += 1
            elif web_code == 200 and data_byte == 0:
                # Record the timestamp and URL of empty 200 responses.
                WriteURLInfo(line.split()[3].lstrip("["))
                WriteURLInfo("\t")
                WriteURLInfo(line.split()[10])
                WriteURLInfo("\n")
        elif web_code in id_list_300:
            count_300 += 1
        elif web_code == 403 and IP not in list_ip_403:
            list_ip_403.append(IP)
        elif web_code == 404 and IP not in list_ip_404:
            list_ip_404.append(IP)
        elif web_code == 415 and IP not in list_ip_415:
            list_ip_415.append(IP)
        elif web_code == 499 and IP not in list_ip_499:
            list_ip_499.append(IP)
        elif web_code == 500 and IP not in list_ip_500:
            list_ip_500.append(IP)
        elif web_code == 502 and IP not in list_ip_502:
            list_ip_502.append(IP)
        elif web_code == 503 and IP not in list_ip_503:
            list_ip_503.append(IP)

        if web_code not in id_list_200 and backup_server not in server_list:
            count_not_200_backup += 1

        # Unique client IPs whose successful requests took more than 1s / 3s.
        if elapsed > 1.0 and web_code in id_list_success and IP not in list_time_1:
            list_time_1.append(IP)
        elif elapsed > 3.0 and web_code in id_list_success and IP not in list_time_3:
            list_time_3.append(IP)

        # Per-domain download-speed buckets (KB/s): >=100, 30-100, <30.
        if domain_name == "ppsucai.pptv.com" and web_code in id_list_success:
            download_speed_sucai = round(data_byte / elapsed / 1024, 2)
            flow_ppsucai += data_byte
            sum_ppsucai_time += elapsed
            count_sucai += 1
            if download_speed_sucai >= 100:
                count_sucai_100 += 1
            elif download_speed_sucai < 100 and download_speed_sucai >= 30:
                count_sucai_30_100 += 1
            else:
                count_sucai_30 += 1
        elif domain_name == "asimgs.pplive.cn" and web_code in id_list_success:
            download_speed_asimgs = round(data_byte / elapsed / 1024, 2)
            flow_asimgs += data_byte
            sum_asimgs_time += elapsed
            count_asimgs += 1
            if download_speed_asimgs >= 100:
                count_asimgs_100 += 1
            elif download_speed_asimgs < 100 and download_speed_asimgs >= 30:
                count_asimgs_30_100 += 1
            else:
                count_asimgs_30 += 1
        elif domain_name == "static9.pplive.cn" and web_code in id_list_success:
            download_speed_static9 = round(data_byte / elapsed / 1024, 2)
            flow_static9 += data_byte
            sum_static9_time += elapsed
            count_static9 += 1
            if download_speed_static9 >= 100:
                count_static9_100 += 1
            elif download_speed_static9 < 100 and download_speed_static9 >= 30:
                count_static9_30_100 += 1
            else:
                count_static9_30 += 1

    # ... (the source article truncates here: the rest of average_flow(), and
    # the Writelog / WriteURLInfo / WriteTmpInfo / result_dic helpers it relies
    # on, are missing) ...


if __name__ == "__main__":
    if len(sys.argv) < 2:
        show_usage()
        os.popen("rm -f /tmp/exist.txt")
        sys.exit(1)
    else:
        # /tmp/exist.txt acts as a crude lock so only one instance runs.
        if os.path.isfile("/tmp/exist.txt"):
            sys.exit(1)
        else:
            os.popen("echo 'hello' > /tmp/exist.txt")
    result_key = sys.argv[1]
    status = result_dic()
    os.popen("> /tmp/webcdnqos_result.txt")
    print status[result_key]
    Writelog(str(status[result_key]))
    for key in status.keys():
        WriteTmpInfo(str(key) + "=" + str(status[key]))
    os.popen("rm -f /tmp/exist.txt")

After reading the above, do you have a better understanding of what an Nginx HTTP code analysis script is? If you want to learn more, please follow the industry information channel. Thank you for your support.
