Analyzing ngx_lua_waf Software Firewall Logs with ELK
For an introduction to ngx_lua_waf and deployment instructions, see https://github.com/loveshell/ngx_lua_waf
It is a web application firewall based on lua-nginx-module, written by Zhang Huiyuan (ID: kindle; Weibo: @Magic Wizard).
Purpose:
Prevent web attacks such as sql injection, local file inclusion, partial overflow, fuzzing, xss, and SSRF
Prevent leakage of files such as svn files and backups
Fend off stress-testing tools such as ApacheBench
Block common scanning and hacking tools and scanners
Block abnormal network requests
Deny php execution permission in image/attachment upload directories
Prevent webshell upload
Summary:
1. The functionality is fairly complete, and it is somewhat simpler than Modsecurity, another software firewall.
2. The weaknesses of ngx_lua_waf come mainly from loosely written firewall rules, which leads to two outcomes: some attacks bypass the firewall through disguise, and badly configured rules cause false positives that block legitimate requests.
3. In addition, policies should be configured per site type, since the default configuration takes effect globally. For example, forums and similar sites allow a lot of inserted html, so their policy needs to be more relaxed.
4. The generated hack log can be analyzed with ELK. ELK needs a dedicated template matching the log format. The template here is compatible with most of the log lines; a small number of lines are too irregular to parse at all.
5. ELK can display a classification of the analysis results, but it cannot tell which attack type each of the various ruletags belongs to.
6. Finally, if you really want to use ngx_lua_waf, consider trying it first on some origin-server sites. It is not recommended on front-end sites until you understand the software deeply enough to run it in production.
7. Addendum: at present ngx_lua_waf handles three major rule categories well: denying specific user agents, denying access to files with specific suffixes, and preventing sql injection.
Follow-up plan:
Someone has refactored ngx_lua_waf (the project of "Monitor Zhao", https://github.com/unixhot/waf). Its main features: blacklist and whitelist support, a log-only mode that records without blocking requests (so the rules and a friendlier log format can be observed safely). The plan is to test with this refactored waf later.
The general steps of the plan are as follows:
1. Do not enforce blocking immediately after deployment: record logs only at first, then observe and adjust the rules to make sure normal requests are never falsely blocked.
2. Use SaltStack to manage updates to the rule base.
3. Use the ELK Stack for log collection and analysis; it is very convenient to build nice statistical pie charts in Kibana (this is already achievable today). A minimal input sketch follows this list.
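For step 3, here is a minimal sketch of the Logstash input that ships the waf hack log into the pipeline, tagged with the type used by the filters later in this post. The log path is assumed to be the ngx_lua_waf default logdir; adjust it to your own config.lua setting:

input {
  file {
    path => "/usr/local/nginx/logs/hack/*.log"   # assumed ngx_lua_waf hack log directory
    type => "waf194"                             # must match the conditional used in the filter below
    start_position => "beginning"
  }
}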
Below are the points to pay attention to during implementation.
First, adding a new Logstash segmentation strategy for ngx_lua_waf logs in ELK
1. Log format

line = realIp.." ["..time.."] \""..method.." "..servername..url.."\" \""..data.."\" \""..ua.."\" \""..ruletag.."\"\n"
2. Log content

192.168.200.106 [2016-07-26 16:56:17] "UA i.maicar.com/AuthenService/Frame/login.aspx?ra=0.5440098259132355" "-" "Baidu-YunGuanCe-ScanBot (ce.baidu.com)" "(HTTrack|harvest|audit|dirbuster|pangolin|nmap|sqln|-scan|hydra|libwww|BBBike|sqlmap|owasp|Nikto|fimap|havij|PycURL|zmeu|BabyKrokodil|netsparker|httperf|bench|SF/)"
3. Logstash segmentation strategy

The most important part is writing the grok regular expressions. Two suggestions:
1) First, refer to the pattern files shipped with the system under /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/. This directory contains built-in grok patterns for many log formats, for example:
aws bro firewalls haproxy junos mcollective mongodb postgresql redis bacula exim grok-patterns java linux-syslog mcollective-patterns nagios rails ruby
The patterns in grok-patterns are referenced by default, with no extra configuration needed.
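As a quick illustration, the built-in patterns can be used directly inside a grok filter, and patterns_dir can point at a directory of custom patterns. This is a minimal sketch; the field names and the patterns directory path are chosen only for the example:

filter {
  grok {
    # patterns_dir is only needed for custom pattern files; built-ins resolve automatically
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{IPV4:clientip}\s+\[%{TIMESTAMP_ISO8601:ts}\]" }
  }
}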
2) Second, there are two grok syntax verification sites online (both can be hard to reach from mainland China without a proxy):
http://grokdebug.herokuapp.com/
http://grokconstructor.appspot.com/do/match#result
You can also verify it manually, for example:
[root@elk logstash]# /opt/logstash/bin/logstash -f /etc/logstash/test1.conf
Settings: Default pipeline workers: 4
Pipeline main started
192.168.200.106 [2016-07-26 16:56:17] "UA i.maicar.com/AuthenService/Frame/login.aspx?ra=0.5440098259132355" "-" "Baidu-YunGuanCe-ScanBot (ce.baidu.com)" "(HTTrack|harvest|audit|dirbuster|pangolin|nmap|sqln|-scan|hydra|libwww|BBBike|sqlmap|w3af|owasp|Nikto|fimap|havij|PycURL|zmeu|BabyKrokodil|netsparker|httperf|bench|SF/)"
{
       "message" => "192.168.200.106 [2016-07-26 16:56:17] \"UA i.maicar.com/AuthenService/Frame/login.aspx?ra=0.5440098259132355\" \"-\" \"Baidu-YunGuanCe-ScanBot (ce.baidu.com)\" \"(HTTrack|harvest|audit|dirbuster|pangolin|nmap|sqln|-scan|hydra|libwww|BBBike|sqlmap|w3af|owasp|Nikto|fimap|havij|PycURL|zmeu|BabyKrokodil|netsparker|httperf|bench|SF/)\"",
      "@version" => "1",
    "@timestamp" => "2016-07-28T10:05:43.763Z",
          "host" => "0.0.0.0",
        "realip" => "192.168.200.106",
          "time" => "2016-07-26 16:56:17",
        "method" => "UA",
    "servername" => "i.maicar.com",
           "url" => "/AuthenService/Frame/login.aspx?ra=0.5440098259132355",
          "data" => "-",
     "useragent" => "Baidu-YunGuanCe-ScanBot (ce.baidu.com)",
       "ruletag" => "(HTTrack|harvest|audit|dirbuster|pangolin|nmap|sqln|-scan|hydra|libwww|BBBike|sqlmap|w3af|owasp|Nikto|fimap|havij|PycURL|zmeu|BabyKrokodil|netsparker|httperf|bench|SF/)"
}
The test1.conf content is as follows:
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => "%{IPV4:realip}\s+\[%{TIMESTAMP_ISO8601:time}\]\s+\"%{WORD:method}\s+%{HOSTNAME:servername}(?<url>[^\"]+)\"\s+\"(?<data>[^\"]+)\"\s+\"(?<useragent>[^\"]+)\"\s+\"(?<ruletag>[^\"]+)\"" }
  }
}
output {
  stdout { codec => rubydebug }
}
Finally, the online segmentation strategy is as follows:
filter {
  if [type] == "waf194" {
    grok {
      match => [ "message", "%{IPV4:realip}\s+\[%{TIMESTAMP_ISO8601:time}\]\s+\"%{WORD:method}\s+%{HOSTNAME:servername}(?<url>[^\"]+)\"\s+\"(?<data>[^\"]+)\"\s+\"(?<useragent>[^\"]+)\"\s+\"(?<ruletag>[^\"]+)\"" ]
      remove_field => ["message"]
    }
  }
}
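One optional refinement, not part of the original configuration: the parsed time field is only a string, so @timestamp still reflects ingestion time rather than event time. A date filter can map it, assuming the yyyy-MM-dd HH:mm:ss format shown in the log sample above:

filter {
  if [type] == "waf194" {
    date {
      # parse the waf "time" field (e.g. 2016-07-26 16:56:17) into @timestamp
      match => [ "time", "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}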
Second, adding an index template for ngx_lua_waf logs in ELK
The most critical problem is the coexistence of multiple index templates: previously, a template loaded later would always overwrite the original one. Following http://mojijs.com/2015/08/204352/index.html, set template_name to distinguish different index templates.
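A minimal sketch of the idea, with a hypothetical second pipeline alongside the waf one; each elasticsearch output keeps its own template because the template_name values differ:

output {
  if [type] == "waf194" {
    elasticsearch {
      hosts         => ["192.168.88.187:9200"]
      template      => "/opt/logstash/templates/logstashwaf.json"
      template_name => "logstashwaf"          # unique name, so it will not overwrite others
      index         => "logstash-waf194-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts         => ["192.168.88.187:9200"]
      template_name => "logstash"             # the stock template keeps its own name
      index         => "logstash-%{type}-%{+YYYY.MM.dd}"
    }
  }
}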
The output template is configured as follows:
output {
  elasticsearch {
    hosts => ["192.168.88.187:9200", "192.168.88.188:9200", "192.168.88.189:9200"]
    sniffing => false
    manage_template => true
    template => "/opt/logstash/templates/logstashwaf.json"
    template_overwrite => true
    template_name => "logstashwaf.json"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    flush_size => 10000
  }
}
The specific logstashwaf.json configuration is shown below. It directly references the official elasticsearch-logstash.json, with only the template name changed. In this configuration, string fields are mapped without analysis (no word segmentation): unanalyzed fields save memory when building views, and long strings such as url, useragent and ruletag only need exact matching anyway, not analyzed fuzzy matching. If only some string fields should skip analysis, "match": "*" can also be changed to specific fields, for example:
"match_pattern": "regex", # add "match": "(realip) | (mothod) (servername) | (url) | (data) | (useragent) | (ruletag)"
The overall online configuration is as follows:
{"template": "logstash-waf194*", "settings": {"index.refresh_interval": "5s"}, "mappings": {"_ default_": {"_ all": {"enabled": true, "omit_norms": true} "dynamic_templates": [{"message_field": {"match": "message", "match_mapping_type": "string", "mapping": {"type": "string", "index": "analyzed" "omit_norms": true, {"string_fields": {"match": "*", "match_mapping_type": "string", "mapping": {"type": "string", "index": "analyzed", "omit_norms": true "fields": {"raw": {"type": "string", "index": "not_analyzed", "ignore_above": 256}], "properties": {"@ version": {"type": "string" "index": "not_analyzed"}, "geoip": {"type": "object", "dynamic": true, "properties": {"location": {"type": "geo_point"}
Third, displaying the hack log generated by ngx_lua_waf in ELK
Building the Kibana views, dashboards and so on is omitted here.
The results of analyzing a small sample of logs are as follows:
[Kibana screenshots of the analysis results from the original page]