Shulou (Shulou.com), SLTechnology News & Howtos — 05/31 report, updated 2025-02-27
This article introduces how to build an open source security incident response platform with Elasticsearch and TheHive. It has some reference value for interested readers; I hope you learn a lot from it.
Overview
Using only open source software, we can build a security incident response platform that integrates logs, generates alerts, enriches IoCs, and manages cases.
In the flow above, Wazuh acts as a HIDS and sends data to the Wazuh Manager and on to Elasticsearch. ElastAlert watches for new events and creates corresponding alerts in TheHive. Each alert is then enriched with additional information through Cortex and MISP queries, and is either closed automatically or escalated to an analyst.
Note that any endpoint service or agent that can ship logs to Elasticsearch can be substituted here; the strength of this design is that most of the components are replaceable.
Wazuh
Wazuh is an open source security monitoring solution for collecting and analyzing host security data. It is a fork of the OSSEC project. Wazuh's components integrate tightly with Elasticsearch and Kibana and can perform many security-related tasks, such as log analysis, rootkit detection, listening-port detection, and file integrity monitoring.
Elasticsearch
Elasticsearch acts as the log repository for the whole system. It is powerful and feature-rich, and is often used together with Logstash (log collection) and Kibana (visualization). Elasticsearch provides a solid platform for storing all kinds of data.
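As a sketch of how log events actually reach that repository, the snippet below builds an Elasticsearch _bulk request body: newline-delimited JSON, one action line followed by one document line per event. The index name and documents are hypothetical; only the NDJSON shape comes from the Elasticsearch bulk API.

```python
import json

def bulk_body(index, docs):
    """Build an Elasticsearch _bulk request body (NDJSON):
    one action line followed by one source line per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": "_doc"}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the _bulk endpoint requires a trailing newline

body = bulk_body("logs-demo", [
    {"host": "web01", "message": "sshd: failed password"},
    {"host": "web01", "message": "sshd: session opened"},
])
# POST this body to http://<es-host>:9200/_bulk with
# the header Content-Type: application/x-ndjson
```

In practice Filebeat and Logstash build these requests for you; the point is only that every shipper in this architecture ultimately speaks this simple format.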
ElastAlert
ElastAlert is a project started by Yelp to provide an alerting mechanism for Elasticsearch. ElastAlert queries Elasticsearch through the REST API and supports multiple outputs for matched alerts.
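The rule type used later in this article is ElastAlert's "frequency" rule: fire when at least num_events matching events land within a time window. A minimal, self-contained sketch of that matching logic (not ElastAlert's actual implementation, just the concept):

```python
from datetime import datetime, timedelta

def frequency_match(event_times, num_events, timeframe):
    """Return the start times of windows holding at least num_events events,
    mimicking the idea behind ElastAlert's 'frequency' rule type."""
    times = sorted(event_times)
    alerts = []
    for start in times:
        # count events falling inside [start, start + timeframe)
        window = [t for t in times if start <= t < start + timeframe]
        if len(window) >= num_events:
            alerts.append(start)
    return alerts

now = datetime(2019, 3, 31, 18, 0)
hits = [now, now + timedelta(minutes=5), now + timedelta(minutes=20)]
# three failed logins within one hour -> the rule matches once
print(frequency_match(hits, num_events=3, timeframe=timedelta(hours=1)))
```

The real ElastAlert evaluates this against query hits from Elasticsearch every run_every interval, as configured later in this article.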
TheHive
TheHive is a scalable, free, open source security incident response platform designed to let any security practitioner handle incidents easily and act quickly. In essence, TheHive is an alert management platform for all incident alerts.
Cortex
Cortex and TheHive are developed by the same team. Cortex uses analyzers to gather additional data about observables found in logs: it can query third-party services for indicators such as IPs, URLs, and file hashes, and attach the returned results to alert events as enrichment.
MISP
MISP is an open source threat intelligence sharing platform maintained by CIRCL. Its feeds can be paid subscriptions provided by organizations or open source feeds maintained by the community; these feeds are the main source of enrichment data, and they vary widely.
Elasticsearch installation
First deploy the Elasticsearch cluster on Ubuntu 16.04 (this article uses a virtual machine; a DHCP reservation keeps the VM on the same IP address).
The author provides a Vagrantfile to help build and configure an Elasticsearch virtual machine.
ELK installation
Note: TheHive is undergoing some back-end refactoring, which can complicate setup. The development team behind TheHive believes Elasticsearch no longer meets their needs and will move to GraphDB as the back end after version 4.0. Both the current stable 3.2.1 and the 3.3.0 beta used here rely on Elasticsearch 5.6 as the back end. Therefore Elasticsearch, Logstash, and Kibana 6.6.1 are deployed in one virtual machine as the log repository, and a separate Elasticsearch 5.6.15 is deployed in the TheHive virtual machine as its back end.
Although I have summarized the installation steps, you can check the installation guide if you need further details.
# First install Java; OpenJDK is used here
sudo apt-get install openjdk-8-jre
# Add the Elastic GPG key and repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update
# Pin a fixed Elasticsearch version so updates do not break SearchGuard
apt-cache policy elasticsearch
sudo apt-get install elasticsearch=6.6.1 logstash=1:6.6.1-1 kibana=6.6.1
# Hold the packages to prevent automatic upgrades
sudo apt-mark hold elasticsearch logstash kibana
# Enable Elasticsearch, Logstash and Kibana at boot
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl enable logstash.service
sudo systemctl enable kibana.service
# Later, once compatibility is confirmed, you can unhold to upgrade:
# sudo apt-mark unhold elasticsearch
Optimization
By default, the Java heap Elasticsearch uses is 1 GB. You can raise it to 50-80% of total memory by changing the Xms1g and Xmx1g parameters in /etc/elasticsearch/jvm.options. My virtual machine has only 4 GB of memory, so I keep the default.
Edit the configuration file /etc/elasticsearch/elasticsearch.yml:
Uncomment cluster.name and node.name and set them to different names
Set bootstrap.memory_lock to true
Set network.host to 0.0.0.0
Set discovery.type to single-node
Edit the service override with sudo systemctl edit elasticsearch.service and add:
[Service]
LimitMEMLOCK=infinity
Then proceed with:
# Reload units
sudo systemctl daemon-reload
# Start Elasticsearch
sudo systemctl start elasticsearch.service
# Check that Elasticsearch is responding
curl http://localhost:9200/_cat/health
You should see a similar response:
1551641374 19:29:34 demo-cluster green 1 1 0 0 0 0 0 0 - 100.0%
Kibana
Edit Kibana's configuration file: sudo nano /etc/kibana/kibana.yml
Set Kibana to listen on external interfaces: server.host: 0.0.0.0
Start the service: sudo systemctl start kibana.service
Open http://:5601 in a browser; you should see the Kibana console.
Logstash
Run the following commands:
sudo apt install logstash
sudo systemctl enable logstash.service
sudo systemctl daemon-reload
Note: Logstash is not running at this time.
Wazuh installation
This section describes how to install Wazuh Manager and integrate Wazuh with Elasticsearch.
Wazuh's agents and rule set identify endpoint behaviour and generate alerts. These alerts are forwarded from the agents to the Wazuh Manager, which writes them to /var/ossec/logs/alerts/alerts.json. The Filebeat service watches that file for changes and forwards new entries to Elasticsearch.
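The alerts.json file holds one JSON object per line. The sketch below parses one such line; the sample entry is hypothetical, with field names following the Wazuh alert schema (rule, agent), and rule 5710 is the failed-SSH-login rule used later in this article.

```python
import json

# Hypothetical sample in the shape of a Wazuh alerts.json entry
# (one JSON object per line)
sample_line = json.dumps({
    "timestamp": "2019-03-31T18:21:00.000+0000",
    "rule": {"id": "5710", "level": 5,
             "description": "sshd: Attempt to login using a non-existent user"},
    "agent": {"id": "001", "ip": "192.0.2.10"},
})

def parse_alert(line):
    """Parse one line of /var/ossec/logs/alerts/alerts.json."""
    alert = json.loads(line)
    return alert["rule"]["id"], alert["rule"]["description"]

rule_id, desc = parse_alert(sample_line)
print(rule_id)  # -> 5710
```

Filebeat does no such parsing itself; it ships each line verbatim, and Logstash's Wazuh configuration (loaded below) decodes the JSON before indexing.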
Wazuh Manager
# Install the GPG key for the Wazuh repository
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
# Add the repository
echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
# Update
sudo apt update
# Install
sudo apt install wazuh-manager
Please refer to the official documentation for details.
Wazuh API
Wazuh API is required to connect to Kibana. The default account password for Wazuh API is foo/bar, and you can consult the documentation if you want to change it.
# Install NodeJS
sudo curl -sL https://deb.nodesource.com/setup_8.x | bash -
sudo apt install nodejs
# Install the Wazuh API
sudo apt install wazuh-api
# Hold the packages to prevent automatic upgrades
sudo apt-mark hold wazuh-manager
sudo apt-mark hold wazuh-api
Filebeat
curl -s https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-6.x.list
sudo apt update
sudo apt install filebeat=6.6.1
With Filebeat installed, pin it and fetch its configuration:
# Hold to prevent automatic upgrades
sudo apt-mark hold filebeat
# Download the Filebeat configuration file provided by Wazuh
sudo curl -so /etc/filebeat/filebeat.yml https://raw.githubusercontent.com/wazuh/wazuh/3.8/extensions/filebeat/filebeat.yml
# Edit the configuration file
sudo nano /etc/filebeat/filebeat.yml
Replace the YOUR_ELASTIC_SERVER_IP section at the end with the real IP of the Elasticsearch 6.6.1 server.
Start the Filebeat service:
sudo systemctl daemon-reload
sudo systemctl enable filebeat.service
sudo systemctl start filebeat.service
Load the Wazuh template
Load the Wazuh template into Elasticsearch. Run this command on the Elasticsearch host:
curl https://raw.githubusercontent.com/wazuh/wazuh/3.8/extensions/elasticsearch/wazuh-elastic6-template-alerts.json | curl -X PUT "http://localhost:9200/_template/wazuh" -H 'Content-Type: application/json' -d @-
Load Logstash configuration
Download Wazuh's Logstash configuration for a remote installation; run this command on the Elasticsearch host:
curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/3.8/extensions/logstash/01-wazuh-remote.conf
Make sure to run the Logstash service with the new configuration file: sudo systemctl restart logstash.service
Install the Wazuh Kibana plugin
First try the command given in the Wazuh documentation:
sudo -u kibana NODE_OPTIONS="--max-old-space-size=3072" /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.8.2_6.6.1.zip
If an error is found:
Plugin installation was unsuccessful due to error "Command failed: /usr/share/kibana/node/bin/node /usr/share/kibana/src/cli --env.name=production --optimize.useBundleCache=false --server.autoListen=false --plugins.initialize=false
Browserslist: caniuse-lite is outdated. Please run next command `npm update caniuse-lite browserslist`
Running npm update caniuse-lite browserslist fails because Node is not installed on this machine.
Uninstall the plugin and rerun the command without the NODE option: sudo -u kibana /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.8.2_6.6.1.zip. The installation should now succeed; then restart Kibana.
Install Wazuh Agent
Follow the instructions to install according to your own system. This article installs on Linux.
After installation, set the Wazuh Manager's IP address in ossec.conf. On Debian the agent is installed under /var/ossec. Edit it with sudo nano /var/ossec/etc/ossec.conf and change MANAGER_IP.
Agent registration
We register the agent in a simple (and insecure) way:
On the Manager, start the registration service: /var/ossec/bin/ossec-authd
On the Agent, run the agent-auth program with the Manager's IP address:
For Linux: /var/ossec/bin/agent-auth -m
For Windows: C:\Program Files (x86)\ossec-agent\agent-auth.exe -m
You should see the following output:
INFO: No authentication password provided.
INFO: Connected to xxx.xxx.xxx.xxx:1515
INFO: Using agent name as: xxxxxxx
INFO: Send request to manager. Waiting for reply.
INFO: Received response with agent key
INFO: Valid key created. Finished.
INFO: Connection closed.
Wazuh dashboard
When you connect to Kibana, you will find the Wazuh icon in the left toolbar. Click it to reach the API configuration page:
Username: foo
Password: bar
Server: http://
API Port: 55000
Save the API configuration and go to the Overview page. Click Agents at the top of the page to see the agent with ID 001, i.e. the host registered earlier. It should show the Active state; otherwise, make sure the MANAGER_IP change took effect and restart the agent service on the Agent.
Test
In Kibana, go to Management > Elasticsearch > Index Management to see the index named wazuh-monitoring-3.x. Then go to Management > Kibana > Index Patterns; if no default index pattern is defined yet, click wazuh-monitoring and then the star in the upper right corner to set it as the default.
Click Discover to view the events that have been created; there may be none yet. Go back to the Elasticsearch Index Management page and wait for an index named wazuh-alerts to appear. We can make it appear by generating an alert.
As a test, go to another host and try to SSH into the monitored host with a fake user: ssh fakeuser@. This triggers an invalid login attempt in the host's auth.log, which the Wazuh agent picks up, creating a new entry in the freshly created wazuh-alerts index. We now have a repository where alerts are stored.
MISP deployment
Deploying MISP allows Cortex, or any program that can issue simple REST requests, to query threat indicators such as IP addresses, URLs, and file hashes. MISP subscribes to and queries feeds on its own; the information returned depends on the data each feed provides, and feeds differ widely: some are plain lists, while others carry a lot of additional information.
Deploying MISP with Docker is much easier than installing it from source, and the Harvard security team provides an image. Note: for a production deployment, use build.sh so you can change the default MySQL password and MISP_FQDN before building.
Initialize the MISP database: docker run -it --rm -v /docker/misp-db:/var/lib/mysql harvarditsecurity/misp /init-db. This starts a container, runs a script that populates the misp-db directory with the necessary database files, and finally removes the container. Looking at the misp-db directory, you can see the files that were added.
Generate SSL certificate
Cortex cannot query MISP without an SSL certificate. Generate one as follows: sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /docker/certs/misp.key -out /docker/certs/misp.crt. All default options can be accepted if there are no special requirements.
Run the container:
docker run -it -d \
  -p 443:443 \
  -p 80:80 \
  -p 3306:3306 \
  -v /docker/certs:/etc/ssl/private \
  -v /docker/misp-db:/var/lib/mysql \
  harvarditsecurity/misp
Open https://localhost in a browser and log in with admin@admin.test as the username and admin as the password. You are required to change the password; the new one needs at least 12 characters, including uppercase letters and special characters.
Configure MISP
Set MISP.live to TRUE and MISP.disable_emailing to TRUE in Administration > Server Settings and Maintenance > MISP Settings.
Select a feed to subscribe to in the Sync Actions > List Feeds list. I chose malwaredomainlist: tick its checkbox and click Enable Feed at the top. The feed now appears in the list; click the down arrow to fetch all of its events, and monitor the running jobs under Administration > Jobs.
Click the magnifying-glass icon to display the list of IPs, and copy any one of them for later use.
Postman
We use Postman to test the API; turn SSL certificate verification OFF under File > Settings.
In MISP, copy the user's auth key from Administration > List Users. In Postman, set three header fields:
Accept: application/json
Content-Type: application/json
Authorization: (paste the user's auth key as the value)
At the top, change the verb from GET to POST and set the URL to https://localhost/attributes/restSearch.
Change to the Body tab, click the Raw button and paste the following JSON to replace the value with the previously copied IP address:
{"returnFormat": "json", "value": "8.8.8.8"}
You should receive the following response:
{
  "response": {
    "Attribute": [
      {
        "id": "15",
        "event_id": "1",
        "object_id": "0",
        "object_relation": null,
        "category": "Network activity",
        "type": "ip-dst",
        "to_ids": false,
        "uuid": "5c8550db-5314-4538-a0d8-0146ac110002",
        "timestamp": "1552240859",
        "distribution": "0",
        "sharing_group_id": "0",
        "comment": "",
        "deleted": false,
        "disable_correlation": false,
        "value": "23.253.130.80",
        "Event": {
          "org_id": "1",
          "distribution": "0",
          "id": "1",
          "info": "malwaredomainlist feed",
          "orgc_id": "1",
          "uuid": "5c8550db-2d90-425f-9bc5-0146ac110002"
        }
      }
    ]
  }
}
MISP is now ready to answer queries and for the Cortex integration. You can add more feeds from the subscription list. Under Administration > Scheduled Tasks, set fetch_feeds to 24 and click Update All to schedule the recurring pull job.
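The Postman request above can also be scripted. The sketch below builds the same POST to /attributes/restSearch with Python's standard library; the base URL, key, and value are placeholders, and actually sending it against the self-signed certificate would additionally need an ssl context with verification disabled.

```python
import json
import urllib.request

def misp_search_request(base_url, api_key, value):
    """Build the POST request shown in the Postman section:
    /attributes/restSearch with the auth key in the Authorization header."""
    body = json.dumps({"returnFormat": "json", "value": value}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/attributes/restSearch",
        data=body,
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": api_key,
        },
        method="POST",
    )

req = misp_search_request("https://localhost", "YOUR_MISP_KEY", "8.8.8.8")
# urllib.request.urlopen(req) would return a JSON body shaped like the
# response above (response -> Attribute -> list of matching attributes)
```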
TheHive & Cortex
This article deploys TheHive 3.3.0 RC5 and Cortex stable v2.1.2, while TheHive version 4.1 (expected to be released in the second quarter of 2019) will remove Elasticsearch as the back end and use GraphDB instead.
Note: the subsequent installation of 3.3.0 stable version can also be used normally.
Install TheHive & Elasticsearch 5
# Add the TheHive and Elasticsearch 5.x repositories and keys
echo 'deb https://dl.bintray.com/thehive-project/debian-beta any main' | sudo tee -a /etc/apt/sources.list.d/thehive-project.list
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
curl https://raw.githubusercontent.com/TheHive-Project/TheHive/master/PGP-PUBLIC-KEY | sudo apt-key add -
# Update and install the required Java
sudo apt-get update
sudo apt-get install openjdk-8-jre
# Install Elasticsearch 5.6.15, the latest 5.x version
sudo apt-get install elasticsearch
# Edit the configuration file
sudo nano /etc/elasticsearch/elasticsearch.yml
#   cluster.name: hive
#   bootstrap.memory_lock: true
#   discovery.type: single-node
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
# Check the response
curl http://localhost:9200
# If that succeeds, install TheHive
sudo apt-get install thehive=3.3.0-0.1RC5
# Prevent version updates
sudo apt-mark hold elasticsearch thehive
# Edit the configuration file: uncomment and change the secret
sudo nano /etc/thehive/application.conf
#   play.http.secret.key
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable thehive
sudo systemctl start thehive
Open http://:9000 in a browser; you should see the database update message.
Click Update Database. If instead of that message you see a login box, the connection to Elasticsearch is broken; check the log at /var/log/thehive.
After the update completes, you are prompted to set the username and password for the administrator account. You can also check the Elasticsearch index named the_hive_14: curl http://127.0.0.1:9200/_cat/indices?v.
If you forget the administrator's account and password, please delete this index and start over.
Install Cortex
Note: some problems will be encountered when installing Cortex 3.0.0-RC1, but not when installing Cortex 2.1.3.
Install Cortex on the TheHive host:
sudo apt-get install cortex=2.1.3-1
sudo apt-mark hold cortex
Cortex has some dependencies that need to be installed first:
sudo apt-get install -y --no-install-recommends python-pip python2.7-dev python3-pip python3-dev ssdeep libfuzzy-dev libfuzzy2 libimage-exiftool-perl libmagic1 build-essential git libssl-dev
sudo pip install -U pip setuptools && sudo pip3 install -U pip setuptools
Install the Cortex analyzers
Pull the source from GitHub and install each analyzer's requirements.txt dependencies separately:
cd /etc/cortex
git clone https://github.com/TheHive-Project/Cortex-Analyzers
The analyzers now live under /etc/cortex:
# Change folder ownership
chown -R root:cortex Cortex-Analyzers
# Install the dependencies for all analyzers (the two lines run as one command)
for i in $(find Cortex-Analyzers -name 'requirements.txt'); do sudo -H pip2 install -r $i; done && \
for i in $(find Cortex-Analyzers -name 'requirements.txt'); do sudo -H pip3 install -r $i || true; done
Installing the dependencies inevitably produces some errors; a good way around this is to install only the analyzers you need, which avoids dependency conflicts. Point Cortex's configuration at the Cortex-Analyzers directory: sudo nano /etc/cortex/application.conf. Uncomment # play.http.secret.key and change the secret. Find the ## ANALYZERS section and set the path to /etc/cortex/Cortex-Analyzers/analyzers.
Start Cortex:
sudo systemctl enable cortex
sudo systemctl start cortex
Open http://:9001 in a browser to check that the installation succeeded. Update the database and create an administrator login, just as when installing TheHive.
Cortex requires you to be logged in under an organization account to enable and manage analyzers; the default admin can only create organizations and users.
Click + Add Organization to create a new organization. Switch to the Users tab, click + Add User to create a new user, assign it to the organization you just created, and give it the OrgAdmin role. After saving, click New Password to set the user's password and press Enter to save. Then log out and log back in as the new user. Click the Organization tab at the top and choose the Analyzers subtab, not the blue Analyzers tab at the top. If Cortex is configured correctly, you should see the available analyzers (113 in my configuration).
The following Analyzer can be enabled to accept the default configuration:
Abuse_Finder_2_0
CyberCrime-Tracker_1_0
Cyberprotect_ThreatScore_1_0
DShield_lookup_1_0
MISP_2_0
URLhaus_2_0
Urlscan_io_Search_0_1_0
None of these requires an API key or further configuration except MISP. Click the Users subtab and create a new user for the TheHive integration. Give the user the read & analyze role; this time do not set a password. Click Create API Key and copy the key.
Click + New Analysis at the top of the page:
Never mind TLP and PAP
Change the data type to IP
Add 8.8.8.8
Check the box next to the Analyzer you enabled
Click Start
Point TheHive's application.conf at Cortex: sudo nano /etc/thehive/application.conf. Scroll to the bottom, find the # Cortex section, and uncomment play.modules.enabled += connectors.cortex.CortexConnector.
Add API key and URL:
play.modules.enabled += connectors.cortex.CortexConnector
cortex {
  "CORTEX-SERVER-ID" {
    url = "http://127.0.0.1:9001"
    key = "wrXichGSPy4xvjpWVdeQoNmoKn9Yxnsn"
    # HTTP client configuration (SSL and proxy)
    # ws {}
  }
}
Restart the server; both services will be ready once they come back up. Click + New Case in TheHive to test Cortex:
Give the case a name and description, then open it, click the Observables tab, and click + Add Observable with Type = ip, Value = 1.1.1.1, and the tag test. Only a tag or a description is required, not both.
Click the IP address in the Observable list, which opens a new tab containing the relevant data, and you can also see the Analyzer at the bottom:
Click Run All, and if you return to Cortex, you will see Analyzer running in the Job History tab. Back at TheHive, Analyzer should now have the time and date for the final analysis. Return to the Observables tab and refresh the page, and you should see a list of tags under Observables:
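The manual case-and-observable workflow above can also be driven over TheHive's REST API. The sketch below only builds the JSON payloads; the endpoint paths in the comments are assumptions based on the TheHive 3 API documentation, and the field names mirror what the UI collects.

```python
import json

def case_payload(title, description, severity=2):
    """Payload behind '+ New Case' (title and description are required)."""
    return {"title": title, "description": description, "severity": severity}

def observable_payload(data_type, data, tags):
    """Payload behind '+ Add Observable'.
    TheHive requires a tag or a description on every observable."""
    return {"dataType": data_type, "data": data, "tags": tags}

case = case_payload("Test case", "Cortex integration test")
obs = observable_payload("ip", "1.1.1.1", ["test"])
# Assumed endpoints: POST json.dumps(case) to http://<thehive>:9000/api/case,
# then POST the observable to /api/case/<caseId>/artifact, sending
# "Authorization: Bearer <api key>" on both requests.
print(json.dumps(obs))
```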
Import report template
Download the report template package from https://dl.bintray.com/thehive-project/binary/report-templates.zip. Log in to TheHive with the administrator account, go to Admin > Report templates, choose Import templates, and select the downloaded package.
Now, when you click the last analysis time in Observables, you will get a report containing the analysis results:
Enable MISP
Click Administration > Add User on the MISP page:
Assign an email to the user, cortex@admin.test
Add users to the ORGNAME organization
Assign roles user
Cancel all check boxes at the bottom
Copy the user's API key
On the Cortex page, click Organization > Analyzers, enter misp in the search box, and then enable MISP_2_0:
Provide a description for the MISP server
URL = https://
Key = AuthKey from MISP user you created
Cert_check: False
Now go back to the MISP page and click Sync Actions > List Feeds. Find one of the feeds, click on the right magnifying glass, select an IP from the list and copy it.
Click + New Analysis in Cortex, choose the ip data type, and paste the copied IP address. Select the MISP_2_0 analyzer and run it. Click View on the Job History page to see the name of the feed the IP came from and other details. You can also add this IP as an observable in TheHive to test it there. TheHive, MISP, and Cortex are now integrated.
ElastAlert
The final step is to install ElastAlert to generate alerts from events in Elasticsearch. The current version of ElastAlert requires Python 2.7; this article installs it on the Elasticsearch host:
sudo apt install python-pip
pip install elastalert
It installs to /home/username/.local/bin/elastalert. Note: ElastAlert is also available as a Docker image.
Configure ElastAlert
Create a directory for the configuration and rules: mkdir -p ~/elastalert/rules. You can pull the public example configuration or write your own; copy the necessary settings below and save them as ~/elastalert/config.yaml:
rules_folder: /home/username/elastalert/rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: x.x.x.x
es_port: 9200
use_ssl: False
writeback_index: elastalert_status
alert_time_limit:
  days: 2
Run elastalert-create-index to create the necessary indices in Elasticsearch. You should see:
Elastic Version: 6
Mapping used for string: {'type': 'keyword'}
New index elastalert_status created
Done!
Create a rule
On the TheHive page, click Admin > Users and create a user named elastalert without assigning it a role. Click Create API Key and copy the API key. TheHive's administration guide states: "for better auditing, once a user is created, the user cannot be deleted and the account can only be locked."
Each rule defines the query to execute, the parameters that trigger a match, and the list of alerts fired on each match. Here we create a rule that identifies failed SSH logins; edit the rule file with nano ~/elastalert/rules/failed_ssh_login.yaml:
es_host: x.x.x.x
es_port: 9200
name: SSH Failed Login
type: frequency
index: wazuh-alerts-3.x-*
num_events: 2
timeframe:
  hours: 1
filter:
- term:
    rule.id: "5710"
alert: hivealerter
hive_connection:
  hive_host: http://x.x.x.x
  hive_port: 9000
  hive_apikey:
hive_alert_config:
  type: 'external'
  source: 'elastalert'
  description: '{rule[name]}'
  severity: 2
  tags: ['{rule[name]}', '{match[agent][ip]}', '{match[predecoder][program_name]}']
  tlp: 3
  status: 'New'
  follow: True
hive_observable_data_mapping:
- ip: "{match[src_ip]}"
Notice the section at the end that maps observables to types. Although nested field names ({match[data][srcip]}) work in the tag fields, they do not seem to work for hive_observable_data_mapping; only a flat, single field name ('{match[src_ip]}') works there.
To work around this, modify the 01-wazuh.conf configuration file on the Logstash host. In the [data][srcip] filter section, change add_field => ["@src_ip", "%{[data][srcip]}"] to add_field => ["src_ip", "%{[data][srcip]}"], and in the geoip filter section change source => "@src_ip" to source => "src_ip".
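The effect of that Logstash change is just a field flattening: copy the nested [data][srcip] value into a top-level src_ip field so the flat name is available to hive_observable_data_mapping. A minimal sketch of the transformation on an event dict (the event shape is illustrative):

```python
def add_src_ip(event):
    """Mimic the Logstash add_field workaround: copy the nested
    [data][srcip] value to a top-level src_ip field so ElastAlert's
    hive_observable_data_mapping can reference it as {match[src_ip]}."""
    srcip = event.get("data", {}).get("srcip")
    if srcip is not None:
        event["src_ip"] = srcip
    return event

evt = add_src_ip({"rule": {"id": "5710"}, "data": {"srcip": "203.0.113.7"}})
print(evt["src_ip"])  # -> 203.0.113.7
```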
You should now have the src_ip field in the log, which can be verified by Kibana. On the Kibana page, select Management > Kibana > Index Patterns, select the wazuh-alerts index mode, and click Refresh to enable the new field:
# Test the rule
elastalert-test-rule ~/elastalert/rules/failed_ssh_login.yaml
# Run ElastAlert
elastalert --verbose --config ~/elastalert/config.yaml
Generate some alerts against a host running the Wazuh agent by running the following command three times in a row: ssh invaliduser@serverip.
You should now see the alerts in Kibana, and ElastAlert will pick them up on its next run:
INFO:elastalert:Ran SSH Failed Login from 2019-03-31 18:21 UTC to 2019-04-02 15:01 UTC: 3 query hits (0 already seen), 1 matches, 1 alerts sent
A new alert appears under Alerts in TheHive:
Click the page icon on the right to preview the alert, assign it a template, and import it.
Everything is now in place: alerts generated by Wazuh can be handled as cases in TheHive.