How to use automated vulnerability analysis tools in network security
This article looks at how to use an automated vulnerability analysis tool in network security. The editor finds it quite practical and shares it here as a reference; follow along to have a look.
0x01. Overall introduction
PentestEr_Fully-automatic-scanner was written to cut down on tedious manual testing and the routine hunt for common vulnerabilities, and to improve working efficiency. It pulls together a large number of tools already on the market, and its sub-scanners were developed around the requirements of not missing any scan targets, maximizing the usability of the integrated tools, and keeping the design extensible. Usage:
You can run it directly with: python main.py -d cert.org.cn
Mind map
Directory structure
|-- dict
|-- dns_server.txt
|-- lib
|-- __init__.py
|-- cmdline.py
|-- listen
|-- filer.py
|-- report
|-- result
|-- rules
|-- wahtweb.json
|-- commom.txt
|-- subbrute
|-- thirdlib
|-- utils
|-- api.keys
|-- BBScan.py
|-- bingAPI
|-- captcha.py
|-- config.py
|-- dnsbrute.py
|-- gxfr.py
|-- import1.php
|-- main.py
|-- report_all.py
|-- subDomainBrute.py
|-- sublist3r.py
|-- upload.py
|-- wahtweb.py
|-- wydomain.py
|-- launch the program.bat
|-- wvs.bat
This directory structure is confusing, especially the large pile of loose .py files. It lacks an overall software design and feels like improvised code: many files contain errors, there are few comments, and you often have to debug a file just to figure out what it does.
0x02. Information collection
1. Domain name information collection
Before scanning, it is customary to look up the whois information of the target domain. The script implements whois by querying a third-party website and scraping the result. However, the query function was written long ago and the website's front-end code has since been updated, so it can no longer reliably retrieve the whois information of the target domain.
def sub_domain_whois(url_domain):
    """Get the whois result by querying a third-party website, then run regular
    expressions over the returned page to pull the whois data out of the content."""
    um = []
    a1 = requests.get("http://whois.chinaz.com/%s" % (url_domain))
    if 'Registration' not in a1.text:
        print 'whois error'
    else:
        print a1.text
        # Use regular expressions to pick out the wanted content; if the front-end
        # code of the target site changes, these rules stop matching.
        out_result = re.findall(r'([\s\S]*)', a1.text.encode("GBK", 'ignore'))
        out_result_register = re.findall(r'http://(.*?)"', a1.text.encode("GBK", 'ignore'))
        for x in out_result_register:
            if 'reverse_registrant/?query=' in x:
                um.append(x)
                break
        for x in out_result_register:
            if 'reverse_mail/?query=' in x:
                um.append(x)
                break
        print um[0], um[1]
        print out_result[0]
        # Store the result in an HTML file so it can be merged into the final report
        with open('report/whois_email_user.html', 'w') as fwrite:
            fwrite.write('register_user:')
            fwrite.write('registrant reverse lookup')
            fwrite.write('email:')
            fwrite.write('mailbox reverse lookup')
            fwrite.write(out_result[0])
def sub_domain_whois(url_domain):
    # Updated version: query the whoapi.com interface and print the returned JSON fields.
    import json
    a = requests.get("http://api.whoapi.com/?domain={}&r=whois&apikey=demokey".format(url_domain))
    result = a.text
    r = json.loads(result)
    for k, v in r.items():
        print(k, v)
Of course, if you need some detailed information, you may still need to crawl the content of some websites.
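For reference, whois data can also be pulled without scraping a web page by talking to a whois server directly over TCP port 43. The sketch below is not part of the tool; the whois.cnnic.cn server is an assumption that fits .cn domains such as the cert.org.cn example used above.
import socket

def whois_query(domain, server="whois.cnnic.cn"):
    # Assumption: whois.cnnic.cn answers for .cn domains; other TLDs need their own whois server.
    s = socket.create_connection((server, 43), timeout=10)
    try:
        s.sendall((domain + "\r\n").encode())
        response = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            response += chunk
        return response.decode("utf-8", "ignore")
    finally:
        s.close()

if __name__ == "__main__":
    print(whois_query("cert.org.cn"))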
2. Subdomain name collection
For subdomain collection, the system relies heavily on third-party scripts in order to gather as many subdomains as possible. This creates a problem: gluing scripts together this way makes the code hard to read and hard to maintain, and a lot of that code no longer works.
gxfr.py uses search engines (Bing, Baidu, Google) to enumerate subdomains and perform DNS lookups. Run with Bing's API: python gxfr.py --bxfr --dns-lookup -o --domain url_domain; the program saves its results to a file such as domain_gxfr1.txt. The API it depends on can no longer be used.
subDomainsBrute.py takes a dictionary of common subdomain strings, combines each entry with the domain name and checks against DNS servers whether the combined subdomain exists. Run with python subDomainsBrute.py domain; names that resolve successfully are stored in the domain_jiejie.txt file.
wydomain.py obtains the target's subdomains through third-party online interfaces or by crawling query results. Run with python wydomain.py domain; the results from the different sites are stored in separate local .json files.
sublist3r.py collects subdomains of the target domain from third-party engines such as Baidu, Yahoo and Bing, and also offers dictionary enumeration and port scanning. Run with python sublist3r.py -d domain -o domain_sublistdir.txt; the collected subdomains are stored in a local txt file.
gxfr.py
This file uses Bing's API and the Google search engine to query the subdomains of the target domain. Its two main functions are bxfr and gxfr.
The bxfr function uses Bing's API for subdomain enumeration and lookup. It requires an API key with the relevant Bing search features enabled and then requests https://api.datamarket.azure.com/Data.ashx/Bing/Search/Web?Query=domain&$format=json; testing shows that this API is no longer available. After the subdomain results are obtained through the API, the lookup_subs function resolves each address with socket (socket.getaddrinfo(site, 80)) and, on success, stores the result in a txt file.
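The resolution check itself is easy to reproduce. The following is a minimal sketch of that verification step, not the tool's own lookup_subs code; the function name is borrowed from the article and the candidate names are made up:
import socket

def lookup_subs(candidates):
    # Keep only the candidate subdomains that actually resolve in DNS.
    alive = []
    for site in candidates:
        try:
            socket.getaddrinfo(site, 80)   # raises socket.gaierror when the name does not resolve
            alive.append(site)
        except socket.gaierror:
            pass
    return alive

print(lookup_subs(["www.cert.org.cn", "mail.cert.org.cn"]))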
The gxfr function uses Google hack syntax to query the search engine (site:baidu.com) and then extracts subdomains from the results with a regular expression such as pattern = '>([\.\w-]*)\.%s' % domain.
0x03. Vulnerability scanning
The directory and vulnerability scanning part is built around BBScan. Its _enqueue method first checks the queue against LINKS_LIMIT and only then appends a new URL:
if len(self.urls_in_queue) > self.LINKS_LIMIT:
    return False
else:
    self.urls_in_queue.append(url)
Through this method, the vulnerability detection rules are mapped onto the URL to form a tuple, and each tuple is put into a queue to wait for the vulnerability scan. The code is as follows:
for _ in self.url_dict:
    full_url = url.rstrip() + _[0]
    url_description = {'prefix': url.rstrip(), 'full_url': full_url}
    item = (url_description, _[1], _[2], _[3], _[4], _[5])
    self.url_queue.put(item)
crawl_index method: this method uses BeautifulSoup4 to extract the URL links from a page. Part of the code is as follows:
soup = BeautifulSoup(html_doc, "html.parser")
links = soup.find_all('a')
for l in links:
    url = l.get('href', '')
    url, depth = self._cal_depth(url)
    if depth < self.max_depth:
        self._enqueue(url)
_update_severity method: this method updates the severity; if a rule carries a severity field, the default final_severity is overwritten with it:
if severity > self.final_severity:
    self.final_severity = severity
_scan_worker method: the key code that performs the vulnerability scan is as follows:
try:
    item = self.url_queue.get(timeout=1.0)
except:
    return
try:
    url_description, severity, tag, code, content_type, content_type_no = item
    url = url_description['full_url']
    prefix = url_description['prefix']
except Exception, e:
    logging.error('[_scan_worker] %s' % e)
    continue
if not item or not url:
    break
status, headers, html_doc = self._http_request(url)
if (status in [200, ]) and (self.has_404 or status != self.diagnostic_status):
    if code and status != code:
        continue
    if not tag or html_doc.find(tag) >= 0:
        if content_type and headers.get('content-type', '').find(content_type) < 0 or \
                content_type_no and headers.get('content-type', '').find(content_type_no) >= 0:
            continue
        self.lock.acquire()
        if prefix not in self.results:
            self.results[prefix] = []
        self.results[prefix].append({'status': status, 'url': '%s://%s%s' % (self.schema, self.host, url)})
        self._update_severity(severity)
        self.lock.release()
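The has_404 flag used above reflects a common trick for filtering false positives: before scanning, request a path that almost certainly does not exist and record how the server answers. A minimal sketch of that idea follows; it is not BBScan's actual code, and the requests library and the check_404 name are assumptions:
import uuid
import requests

def check_404(base_url):
    # Probe a random, almost certainly non-existent path to learn how the site
    # answers "not found"; some sites return 200 with a custom error page.
    probe = "%s/%s" % (base_url.rstrip("/"), uuid.uuid4().hex)
    r = requests.get(probe, timeout=10, allow_redirects=False)
    # Remember the status and body length so later hits that look the same can be discarded.
    return r.status_code, len(r.content)

print(check_404("http://www.cert.org.cn"))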
scan method: this is a multithreaded launcher that starts the _scan_worker workers and finally returns the hostname under test, the scan results and the severity. The code is as follows:
for i in range(threads):
    t = threading.Thread(target=self._scan_worker)
    threads_list.append(t)
    t.start()
for t in threads_list:
    t.join()
for key in self.results.keys():
    # prune prefixes that matched too many URLs (the exact threshold is not shown here)
    if len(self.results[key]) > 5:
        del self.results[key]
return self.host, self.results, self.final_severity
1. nmap scan
The target is scanned by calling nmap directly, and the scan result is stored in nmap_port_services.xml. The command used to launch it is:
nmap --script banner,http-headers,http-title -oX nmap_port_services.xml <target>
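As noted, nmap is driven as an external command. Below is a minimal sketch of wrapping such a call from Python; it is not the tool's own code, and while the exact options the tool passes are not fully visible here, --script and -oX are standard nmap flags:
import subprocess

def run_nmap(target, xml_path="nmap_port_services.xml"):
    # --script runs the listed NSE scripts; -oX writes XML output that the
    # report stage can parse later.
    cmd = ["nmap", "--script", "banner,http-headers,http-title", "-oX", xml_path, target]
    subprocess.check_call(cmd)
    return xml_path

run_nmap("cert.org.cn")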
2. AWVS scanning
AWVS is called from the command line to scan the target website, and a thread is started in the system to monitor the running of that process, as follows:
while not os.path.exists(...):        # wait for the expected result path to appear
    time.sleep(20)
popen = subprocess.Popen(..., stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while True:
    next_line = popen.stdout.readline()
    if not next_line and popen.poll() != None:
        break
    sys.stdout.write(next_line)
The code for wvs.bat is as follows:
@echo off
set /p pp=please input wvs path,eg:[d:\Program Files (x86)\Web Vulnerability Scanner]:
for /f %%i in (result.txt) do (
    echo %%i running %%i
    ... /scan %%i /Profile default /SaveFolder d:\wwwscanresult\%pp%\ /save /Verbose
)
0x04. Report generation
First, the XML file generated by nmap is formatted by the import1.php script and written out again. The core code is as follows:
// iterate over the <port> elements of the nmap host and append one line per port
@file_put_contents(..., ..., FILE_APPEND);
print ...;
foreach (...->port as ...) {
    ...[...];
    (int) ...[...];
    ...->script[...];
    ...->service[...];
    print ...;
    @file_put_contents(..., FILE_APPEND);
}
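For comparison, the same extraction can be done with the Python standard library. This is only a sketch assuming standard nmap -oX output, not a drop-in replacement for import1.php:
import xml.etree.ElementTree as ET

def parse_nmap_xml(path="nmap_port_services.xml"):
    # Yield (host, port, state, service) tuples from nmap -oX output.
    tree = ET.parse(path)
    for host in tree.getroot().findall("host"):
        addr_el = host.find("address")
        addr = addr_el.get("addr") if addr_el is not None else ""
        ports_el = host.find("ports")
        if ports_el is None:
            continue
        for port in ports_el.findall("port"):
            state = port.find("state").get("state")
            service_el = port.find("service")
            service = service_el.get("name") if service_el is not None else ""
            yield addr, port.get("portid"), state, service

for row in parse_nmap_xml():
    print(row)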
Finally, all the results are merged by the report_all function, and the paths of these files are collected into left.html. The code is as follows:
html_doc = left_html.substitute({...})
with open(..., 'w') as outFile:
    outFile.write(html_doc)
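The left_html object is used like a string.Template. A self-contained illustration of that pattern is given below; the template fields and most file names are made up for the example (left.html and whois_email_user.html appear elsewhere in the article):
from string import Template

left_html = Template("""
<html><body>
<h3>Scan report for $target</h3>
<a href="$whois_file">whois</a> | <a href="$nmap_file">nmap</a>
</body></html>
""")

html_doc = left_html.substitute({
    'target': 'cert.org.cn',                           # illustrative values
    'whois_file': 'report/whois_email_user.html',
    'nmap_file': 'report/nmap_port_services.html',
})

with open('report/left.html', 'w') as outFile:
    outFile.write(html_doc)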
0x05. Summary
This system can only be said to be good enough for personal use; maintaining and extending the code later would be very difficult. Third-party scripts are driven through command execution and their output is collected, rather than being imported as modules. Still, the overall idea is worth borrowing: a great deal of effort goes into the early subdomain collection stage, where many third-party scripts are called so that the subdomains of the target domain are collected as completely as possible, and that part is worth learning from. For the vulnerability scanning module, besides the third-party scripts, AWVS is also used to scan the target. The rule design of the bundled BBScan scanner is worth studying and is highly customizable. Overall, the design of the scanner is sound, but the way these third-party scripts are glued together is a bit hasty.
Thank you for reading! This is the end of the article on "how to use automated vulnerability analysis tools in network security". I hope the content above has been of some help and lets you learn something new. If you think the article is good, feel free to share it so more people can see it!