How do you use ELK to build a password topN statistics library? Many newcomers are not clear on this, so this article walks through the process in detail; anyone who needs it can follow along, and hopefully you will get something out of it.
ELK itself is a very powerful log processing stack composed of Elasticsearch, Logstash, and Kibana, covering storage, data processing, and front-end display. We can use these components to build a system for password topN statistics; of course, statistics at this scale also need solid processing power.
Build the basic environment
Operating system: Ubuntu 20.04 64-bit
Memory: 16 GB
Hard disk: 2 TB data disk, 128 GB SSD system disk
Elasticsearch: 7.10.1
Kibana: 7.10.1
Logstash: 7.10.1
1. Elasticsearch
Extract the archive with tar -zxvf elasticsearch*.tar.gz and change into the elasticsearch directory. After that, all settings for Elasticsearch are made in this directory.
Modify the configuration file, config/elasticsearch.yml
It is recommended to modify the configuration as follows
# ---------- Paths ---------- (modify according to the actual situation)
# data storage path
path.data: /path/to/data
# log file path
path.logs: /path/to/logs
# ---------- Memory ----------
# lock the memory on startup; make sure the heap size is set to about half of the memory available on the system
bootstrap.memory_lock: true
# ---------- Network ----------
# bind address; for a single-machine setup it is suggested to use 127.0.0.1
network.host: 127.0.0.1
http.port: 9200
Start elasticsearch
Start it directly with the command ./bin/elasticsearch, which runs in the foreground. Check whether it started successfully with the command curl 127.0.0.1:9200.
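If Elasticsearch is up, that curl call returns a small JSON document describing the node. The values below are only illustrative, but a 7.10.1 node typically answers in this shape:

curl 127.0.0.1:9200
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.10.1",
    ...
  },
  "tagline" : "You Know, for Search"
}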
However, an error may be reported during startup complaining that there is not enough memory. In that case you need to adjust the JVM heap size; it is recommended to set it to half of the actual memory. My machine has 16 GB of RAM, so I use 8 GB here.
File location: ./config/jvm.options
-Xms8g
-Xmx8g
Then run ES in the background with the command nohup ./bin/elasticsearch &.
2. Kibana
Download kibana
Extract the archive with tar -zxvf kibana*.tar.gz and change into the kibana directory. After that, all settings for Kibana are made in this directory.
Kibana configuration file location: ./config/kibana.yml
# listening port (default configuration)
server.port: 5601
# IP to bind; it is recommended to change this to 0.0.0.0 so Kibana listens on all network interfaces
server.host: 0.0.0.0
# elasticsearch address; configure according to the actual situation, here it is localhost, i.e. 127.0.0.1
elasticsearch.hosts: ["http://localhost:9200"]
Then run Kibana in the background with the command nohup ./bin/kibana & and access it in a browser at host:5601.
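As an optional sanity check before opening the browser (assuming the default settings above), Kibana's status API can also be queried from the shell; once Kibana is ready it reports an overall state of green:

# should return a JSON status document with an overall state of "green"
curl 127.0.0.1:5601/api/status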
3. Logstash
In fact, the easier way to add data is to use Logstash for the import. With this approach you can adapt to the actual format of your own data, and it is highly customizable, although there is a certain learning curve. Logstash is also the most important part of the whole setup, namely data import: through Logstash, data in various formats can be imported into ES for storage.
3.1 basic knowledge
Download logstash
Extract the archive with tar -zxvf logstash*.tar.gz and change into the logstash directory. After that, all settings for Logstash are made in this directory.
Before we start importing data, note that Logstash is essentially a tool for collecting and formatting logs, built around input, filter, and output plug-ins.
Input can accept logs collected from Beats (the lightweight shippers in the Elastic stack; there are several kinds of Beats that interested readers can explore on their own), log files, syslog, and so on. See the official manual https://www.elastic.co/guide/en/logstash/current/input-plugins.html for details. We use the file plug-in here.
Filter can format data with grok, json, xml and other plug-ins; pick one according to the actual situation, see the official manual https://www.elastic.co/guide/en/logstash/current/filter-plugins.html. Here we mainly use grok, writing different regular expressions to handle different file layouts.
Output writes out the results and likewise supports a variety of plug-ins such as syslog, csv, file, etc. For more information see the official manual https://www.elastic.co/guide/en/logstash/current/output-plugins.html. Here we use the elasticsearch plug-in to send the results to ES. A skeleton pipeline is sketched below.
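Putting the three stages together, every Logstash pipeline file has this overall shape (a minimal sketch for illustration only; the stdin and stdout plug-ins here are placeholders, the actual pipeline built later uses file, grok, and elasticsearch):

input {
  # where events come from
  stdin { }
}
filter {
  # grok / json / xml ... events are parsed and enriched here
}
output {
  # where events go
  stdout { codec => rubydebug }
}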
3.2 simple configuration
Sample file weakpass.txt
admin----123456
admin----admin
admin----1
admin----12345
test----123
test----test
test----1234
...
There is a sample file named logstash-sample.conf in the config directory.
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
# the data source is configured in input
input {
  # here the beats plug-in is used
  beats {
    port => 5044
  }
}
# data output uses the elasticsearch plug-in
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
Let's configure a weakpass.conf file based on the example file above
input {
  file {
    # sample file to import
    path => "path/weakpass.txt"
    # start position
    start_position => "beginning"
  }
}
filter {
  grok {
    match => {
      # format the data
      "message" => "(?<user>.*?)----(?<password>.*)"
    }
  }
}
output {
  # use the rubydebug codec to print the results to the screen
  stdout { codec => rubydebug }
}
Use the command ./bin/logstash -f config/weakpass.conf to import data using the configuration file we wrote.
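With the rubydebug codec, each imported line is printed to the screen as an event, roughly like the following (a sketch only; the user and password fields come from the grok pattern above, and the exact values depend on your machine):

{
          "path" => "path/weakpass.txt",
       "message" => "admin----123456",
          "user" => "admin",
      "password" => "123456",
          "host" => "my-host",
    "@timestamp" => 2021-01-01T00:00:00.000Z,
      "@version" => "1"
}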
If there is no data output after running the command, it is recommended to delete everything under ./data; make sure you are looking at the right directory before doing so.
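An alternative to clearing the data directory (a sketch, not part of the original walkthrough): the file input remembers its read position in a sincedb file, and pointing that at /dev/null makes Logstash reread the file on every run, which is convenient while testing.

input {
  file {
    path => "path/weakpass.txt"
    start_position => "beginning"
    # do not remember the read position, so the file is re-imported on every run
    sincedb_path => "/dev/null"
  }
}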
3.3 Removing junk fields
To reduce data redundancy and hard disk usage, we delete some useless fields such as path, message, host and so on. We add the following option to grok:
remove_field => ["path", "message", "host"]
With this configuration the amount of data is reduced. In fact @timestamp could also be deleted; it has little meaning here, yet every record carries one, which really does take up disk space.
3.4 data import into ES
Because we are testing, we have been using the same weakpass.txt file the whole time, and Logstash will not reprocess data it has already processed once (this description may not be perfectly accurate), so it is recommended to clear the data directory first (or use the sincedb_path trick mentioned above). Then do the following.
Modify the configuration file as follows
input {
  file {
    # sample file to import
    path => "/media/k2/5fcda6c4-e009-41dd-a314-c54c3c55126b/elk/weakpass1.txt"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => {
      # format the data
      "message" => "(?<user>.*?)----(?<password>.*)"
    }
    remove_field => ["path", "message", "host", "@timestamp"]
    # set a tag; when we have a large amount of data this helps distinguish it
    add_tag => ["weakpass"]
  }
}
output {
  # use the rubydebug codec to print the results to the screen
  # stdout { codec => rubydebug }
  elasticsearch {
    action => "index"
    # index name
    index => "weakpass"
    # ES address
    hosts => ["127.0.0.1:9200"]
  }
}

3.5 Using Kibana
Then we can see our index under Index Management in Kibana.
Next we can create an index pattern based on it.
Once created, you can search the data in Discover.
For example, entering 1 retrieves all records containing 1.
We can also search for records whose user name is admin, which returns every entry belonging to the user admin.
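As a small illustration (assuming the field names user and password produced by the grok pattern above), the search bar in Discover accepts queries such as:

1                                    # free-text search: every record containing 1
user: admin                          # only records whose user field is admin
user: admin and password: 123456     # combine conditions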
Password ranking statistics
You can build the password ranking with a Kibana dashboard.
Create a Data Table, select the index pattern we created above, and then configure it as follows.
This gives us the ranking above. Because there is little data the statistics run quickly; this is only a demonstration, so no more data was imported.
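Under the hood, a Data Table bucketed by password is essentially a terms aggregation, so the same top-N ranking can also be requested directly from the ES REST API. A sketch, assuming the field produced by the grok pattern is called password (with default dynamic mapping, aggregations on a text field go through its password.keyword sub-field):

curl -H 'Content-Type: application/json' 127.0.0.1:9200/weakpass/_search -d '
{
  "size": 0,
  "aggs": {
    "top_passwords": {
      "terms": { "field": "password.keyword", "size": 10 }
    }
  }
}'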
Optimize storage
Because Logstash adds some fields to the imported data that are useless here and appear in every record, you can delete them to reduce the storage needed on the server.
path: the original file path
message: the complete raw record
host: the hostname
@timestamp: the timestamp
input { }
filter {
  grok {
    match => {
      # format the data
      "message" => ".*"
    }
    remove_field => ["path", "message", "host", "@timestamp"]
    # set a tag
    add_tag => ["weakpass"]
  }
}
output { }

Optimize index
Since we will import different password files, we need to prepare for later retrieval. Here we write different index names in the output section according to the imported content, to make later searches easier.
output {
  # use the rubydebug codec to print the results to the screen
  # stdout { codec => rubydebug }
  elasticsearch {
    action => "index"
    # index name
    index => "weakpass-mail-111"
    hosts => ["127.0.0.1:9200"]
  }
}
index => index name. If the passwords we want to import fall into different types, be careful here to use a different index name for each type.
For example weakpass-mail-1, weakpass-q-1, weakpass-b-1, so that when creating index patterns in Kibana we can use a pattern such as weakpass*.
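To check which weakpass indices exist after the imports (an optional verification, not part of the original walkthrough), ES's cat API lists them:

curl '127.0.0.1:9200/_cat/indices/weakpass*?v'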
You can then select the corresponding pattern in discover to retrieve a certain type of data.