Have you ever run a network penetration test so extensive that you ended up with dozens of files, Nmap scan results among them, each containing multiple hosts? If the answer is yes, this blog post should interest you.
This post describes some work I did recently to find a way to triage penetration test results while enabling concurrent collaboration among team members. We will see how traditional "defensive" tools can be used for offensive security data analysis, and how they beat plain grep when it comes to parsing and analyzing data.
Finally, all the source code for this project can be downloaded from GitHub, and I hope it will help anyone with the same need: https://github.com/marco-lancini/docker_offensive_elk.
What are the current options?
If you are still reading this article, you probably want to move away from the grep-based approach. But what are the alternatives?
I first looked at something I had been ignoring: the Nmap HTML report. I don't know how many people know about it and actually use it, but you can take an XML output file from Nmap and pass it to an XSLT processor (such as xsltproc) to convert it into an HTML file, as shown in the following figure:
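For reference, the conversion itself is a one-liner. Here is a minimal example, with placeholder file names and target range, assuming the XML report still carries its default nmap.xsl stylesheet reference:

# run a scan and save the report as XML
nmap -oX scan.xml 192.168.1.0/24
# convert it to HTML using the stylesheet referenced inside the XML
xsltproc scan.xml -o scan.html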
If you are interested, the complete procedure is documented on the Nmap website. However, I think this approach has some drawbacks. First, unless you start Nmap with the --webxml switch, you have to edit every output file to replace the XSL stylesheet reference so that it points to the exact location of the nmap.xsl file on the current machine. Second, and more importantly, this approach does not scale.
After giving up on the HTML approach, I remembered a blog post by my former colleague Vincent Yiu on using Splunk for offensive operations. This is an interesting idea, because we increasingly see so-called "defensive" tools being used for offense as well. Splunk was definitely not for me (I don't have a license), but after some research I finally stumbled upon this blog post: "Nmap + Logstash to Gain Insight Into Your Network".
I had heard of ELK before (more on ELK below), but I had never really looked into it, probably because I had classified it as a "defensive" tool used mainly by SOC analysts. What caught my attention is that the blog post above explains how to:
Import Nmap scan results directly into Elasticsearch, where they can then be visualized with Kibana.
Introduction to the ELK Stack
So, what is the ELK Stack? ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and sends it to a "stash" such as Elasticsearch. Kibana lets users visualize the data in Elasticsearch with charts and graphs.
I won't explain the different components of this stack in detail, but for those interested I highly recommend "The Complete Guide to the ELK Stack", which gives a very good overview of the stack and its three main components (you can skip the "Installing ELK" section, since we will take a different approach).
What interests me is how Elasticsearch can be used not only for detection (defense), but also for offense.
Installation
The following is a complete walkthrough of the setup, up to a successful installation. If you are not interested in the details, you can skip straight to the "Manipulating the data" section.
First, we'll rely on a great repository by @deviantony, which lets us spin up the full ELK stack in seconds, thanks to docker-compose:
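Getting the stack locally is just a clone away (the repository is @deviantony's docker-elk on GitHub):

git clone https://github.com/deviantony/docker-elk.git
cd docker-elk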
After cloning the repository, we can see from the docker-compose.yml file that three services will be started. Here is the modified docker-compose.yml file, in which I added a container name to each service (for clarity) and gave Elasticsearch a way to persist its data even after its container is deleted, by mounting a volume on the host (./_data/elasticsearch:/usr/share/elasticsearch/data):
docker-elk ❯ cat docker-compose.yml
version: '2'

services:

  # --------------------------------------------------------------------------
  # ELASTICSEARCH
  # --------------------------------------------------------------------------
  elasticsearch:
    container_name: elk_elasticsearch
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./_data/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  # --------------------------------------------------------------------------
  # LOGSTASH
  # --------------------------------------------------------------------------
  logstash:
    container_name: elk_logstash
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  # --------------------------------------------------------------------------
  # KIBANA
  # --------------------------------------------------------------------------
  kibana:
    container_name: elk_kibana
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
By default, the stack opens the following ports:
5000: Logstash TCP input
9200: Elasticsearch HTTP
9300: Elasticsearch TCP transport
5601: Kibana
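With the compose file in place, the whole stack can be brought up in one command. A minimal invocation, assuming Docker and docker-compose are already installed, looks like this:

docker-compose up -d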
Give Kibana a few seconds to initialize, then access its web interface at http://localhost:5601.
Prepare Elasticsearch to ingest Nmap results
This was a challenge for a complete ELK novice like me, until I found the post "How to Index NMAP Port Scan Results into Elasticsearch". It is not a complete solution, but it is a good starting point. Let's start there and build on it.
First, we need the Logstash Nmap codec plugin. A Logstash codec simply provides a way to specify how raw data should be decoded, regardless of its source. This means we can feed Nmap XML to the Nmap codec from a variety of inputs: before the data reaches the codec, it could be read from a message queue or via syslog, for example. Fortunately, adding the codec is as simple as modifying the Logstash Dockerfile at logstash/Dockerfile:
docker-elk ❯ cat logstash/Dockerfile
# https://github.com/elastic/logstash-docker
FROM docker.elastic.co/logstash/logstash-oss:6.3.0

# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
RUN logstash-plugin install logstash-codec-nmap
Next, to push the data into Elasticsearch, we need a mapping. A mapping template is available from the GitHub repository of the Logstash Nmap codec. We can download it and place it at logstash/pipeline/elasticsearch_nmap_template.json:
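The template can be fetched with a command along these lines (the exact path inside the logstash-codec-nmap repository may have changed, so treat the URL as indicative):

curl https://raw.githubusercontent.com/logstash-plugins/logstash-codec-nmap/master/examples/elasticsearch/elasticsearch_nmap_template.json \
     -o logstash/pipeline/elasticsearch_nmap_template.json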
Finally, we need to modify the Logstash configuration file at logstash/pipeline/logstash.conf to add filter and output options for the new Nmap plugin:
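As an illustration only (not necessarily the exact configuration shipped in the repository), a pipeline that reads Nmap XML over TCP with the nmap codec and ships it to Elasticsearch using the template above could look roughly like this; the index name and template path simply mirror the files set up in this post:

input {
  tcp {
    port  => 5000
    codec => nmap
    tags  => [ "nmap" ]
  }
}

filter {
  # nothing required for a first pass; enrichment (e.g. geoip) could be added here
}

output {
  if "nmap" in [tags] {
    elasticsearch {
      hosts         => [ "elasticsearch:9200" ]
      index         => "nmap-vuln-to-es"
      # apply the mapping template downloaded in the previous step
      template      => "/usr/share/logstash/pipeline/elasticsearch_nmap_template.json"
      template_name => "nmap"
    }
  }
}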
Prepare the ingestor service
We will use a modified version of VulntoES to parse the results and import them into Elasticsearch. To do this, I created a new folder, ingestor, for the new service that actually ingests the data.
As shown in the listing above, the ingestor folder contains:
VulntoES.py, a modified version of the original script that fixes some parsing errors
An ingest script, which runs VulntoES.py on every XML file placed in the container's /data folder (more on this below; a minimal sketch follows this list)
A Dockerfile, which adds the modified VulntoES to the python:2.7-stretch image
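Below is a minimal sketch of what that ingest script might look like; the path to VulntoES.py inside the image and the exact command-line flags are assumptions, so check the script's usage in the repository before relying on it:

#!/bin/bash
# Run the (modified) VulntoES parser on every Nmap XML report found in /data
for report in /data/*.xml; do
    echo "Processing ${report} ..."
    # -i input file, -e Elasticsearch host, -r report type, -I target index
    # (flag names are illustrative; verify against VulntoES.py itself)
    python /opt/VulntoES/VulntoES.py -i "${report}" -e elasticsearch -r nmap -I nmap-vuln-to-es
done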
Now we just need to add this new container to the docker-compose.yml file:
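A service definition consistent with the description here could look like the following sketch (the repository has the authoritative version):

  # --------------------------------------------------------------------------
  # INGESTOR
  # --------------------------------------------------------------------------
  ingestor:
    container_name: elk_ingestor
    build: ingestor/
    volumes:
      - ./_data/nmap:/data/
    networks:
      - elk
    depends_on:
      - elasticsearch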
Notice how we mount the local folder ./_data/nmap into the container at /data/. We will use this "shared" folder to pass the Nmap results to the ingestor.
After all these changes, this is what your project folder should look like:
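Roughly, it should resemble the layout below (only the files discussed in this post are shown):

docker-elk/
├── docker-compose.yml
├── elasticsearch/
├── ingestor/
│   ├── Dockerfile
│   ├── VulntoES.py
│   └── ingest
├── kibana/
├── logstash/
│   ├── Dockerfile
│   └── pipeline/
│       ├── elasticsearch_nmap_template.json
│       └── logstash.conf
└── _data/
    ├── elasticsearch/
    └── nmap/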
When you are done, be sure to rebuild the images with the docker-compose build command.
Create an index
The final step is to create the index that the data will be indexed into:
1. Use curl to create the nmap-vuln-to-es index:
curl -XPUT 'localhost:9200/nmap-vuln-to-es'
2. Open Kibana (http://localhost:5601) in your browser; you will see the following screen:
3. Enter nmap* as the index pattern, then press "Next step":
4. Select "I don't want to use the Time Filter" and click "Create index pattern":
If all goes well, you should see a page listing every field in the nmap* index, along with the field's associated core type as recorded by Elasticsearch.
Manipulating the data
With ELK configured correctly, we can start playing with our data.
Ingest Nmap results
To ingest our Nmap scans, we must save the results as XML reports (-oX) so that they can be parsed and pushed into Elasticsearch. When the scans are complete, place the reports in the ./_data/nmap/ folder and run the ingestor:
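A workflow along these lines should do (the target range is a placeholder, and the exact command to launch the ingest script depends on how the ingestor image defines its entrypoint):

# 1. scan and save the report as XML directly into the shared folder
nmap -oX ./_data/nmap/target_scan.xml 192.168.1.0/24

# 2. run the ingestor, which processes every XML file found in /data
docker-compose run ingestor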
Analyze the data
Now that we've imported some data, it's time to take a look at the functionality of Kibana.
The "dicover" view displays all the data in the index as a document table and allows you to browse the data interactively: we can access each document in each index that matches the selected index pattern. You can submit search queries, filter search results, and view document data. You can also view the number of documents that match the search query and get field value statistics. It is good to classify targets by filtering (for example, by opening ports or services).
The Dashboard view, on the other hand, displays a collection of visualizations and saved searches. You can arrange, resize, and edit the dashboard contents, then save the dashboard so it can be shared. This can be used to build highly customized overviews of your data.
The dashboard itself is interactive: you can apply filters, and the visualizations update in real time to reflect the query (in the example below, I filtered by port 22).
For those interested, I have exported my sample dashboard to a JSON file that is easy to re-import:
https://raw.githubusercontent.com/marco-lancini/docker_offensive_elk/master/kibana/dashboard.json
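If your Kibana version exposes the (experimental) dashboards import API, the file can be loaded with a call like the one below; otherwise, Management > Saved Objects > Import in the Kibana UI achieves the same result:

# download the sample dashboard and import it into the local Kibana instance
curl -o dashboard.json https://raw.githubusercontent.com/marco-lancini/docker_offensive_elk/master/kibana/dashboard.json
curl -X POST 'http://localhost:5601/api/kibana/dashboards/import' \
     -H 'kbn-xsrf: true' \
     -H 'Content-Type: application/json' \
     -d @dashboard.json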
Conclusion
Traditional "defensive" tools can be effectively used for aggressive security data analysis to help your team collaborate and classify scan results.
In particular, Elasticsearch offers the chance to aggregate a number of different data sources and query them through a unified interface, in order to extract actionable knowledge from a large amount of unclassified data.
This article is reproduced from a translated post; the original source is marcolancini.it.