Shulou(Shulou.com)06/01 Report--
Many readers are unfamiliar with the topic of this article, "how to install Elastic Stack on CentOS 7", so the editor has summarized it below with detailed content and clear steps. It has a certain reference value, and I hope you will get something out of reading it.
The Elastic Stack is a collection of open source products, including Elasticsearch, Kibana, Logstash and Beats, which can securely and reliably ingest data from any source and format, and can search, analyze and visualize the data in real time.
Environment requirements for the Elastic Stack:

- 64-bit CentOS 7, 4 GB memory - elk host (server)
- 64-bit CentOS 7, 1 GB memory - client 1
- 64-bit Ubuntu, 1 GB memory - client 2

Step 1 - Operating system initialization
Disable SELinux on a CentOS 7 server
We will disable SELinux on the CentOS 7 server. Edit the SELinux configuration file.
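If you prefer to script the change rather than edit the file by hand, here is a minimal sketch. It runs against a throwaway copy so it is safe to try anywhere; on a real server the target would be /etc/sysconfig/selinux, followed by a reboot.

```shell
# Work on a throwaway copy so this sketch is safe to run anywhere; on a real
# server the target would be /etc/sysconfig/selinux, followed by a reboot.
conf=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"

# Flip SELINUX=enforcing to SELINUX=disabled in place.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$conf"

# Read back the effective setting (a stand-in for `getenforce` after reboot).
state=$(grep '^SELINUX=' "$conf" | cut -d= -f2)
echo "$state"   # prints: disabled
```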
vim /etc/sysconfig/selinux

Change the value of SELINUX from enforcing to disabled:

SELINUX=disabled

Then restart the server:

reboot

Log in to the server again and check the SELinux status:

getenforce

The command should report: disabled

Step 2 - Install the Java environment
Deployment of the Elastic Stack depends on Java; Elasticsearch requires Java 8, and Oracle JDK 1.8 is recommended. Install Java 8 from the official Oracle rpm package.
wget http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm   # download the Java 8 rpm
rpm -ivh jdk-8u77-linux-x64.rpm   # install the JDK
java -version   # check that Java works

Step 3 - Install and configure Elasticsearch
In this step, we will install and configure Elasticsearch. Install Elasticsearch from the rpm package provided by the elastic.co website and configure it to run on localhost (to ensure that the program is secure and inaccessible from the outside).
Add the key of elastic.co to the server
The elastic.co website is a https website (private certificate). We need to add the certificate key to download safely.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Download and install Elasticsearch 5.1
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
rpm -ivh elasticsearch-5.1.1.rpm
After the installation is complete, edit the configuration file.
Configuration file name: elasticsearch.yml
cd /etc/elasticsearch/
vim elasticsearch.yml

bootstrap.memory_lock: true   # uncomment line 40 to enable Elasticsearch's memory lock; this disables memory swapping for Elasticsearch

In the Network block, uncomment the network.host and http.port lines:

network.host: localhost
http.port: 9200
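The three edits above can also be applied non-interactively with sed. A sketch, run here against a throwaway sample file standing in for /etc/elasticsearch/elasticsearch.yml:

```shell
# A throwaway sample stands in for /etc/elasticsearch/elasticsearch.yml.
yml=$(mktemp)
cat > "$yml" <<'EOF'
#bootstrap.memory_lock: true
#network.host: 192.168.0.1
#http.port: 9200
EOF

# Uncomment the memory lock, and set host/port in the Network block.
sed -i \
  -e 's|^#bootstrap.memory_lock: true|bootstrap.memory_lock: true|' \
  -e 's|^#network.host: .*|network.host: localhost|' \
  -e 's|^#http.port: 9200|http.port: 9200|' "$yml"

cat "$yml"
```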
Edit the memory lock configuration of the elasticsearch.service file.
vim /usr/lib/systemd/system/elasticsearch.service

MAX_LOCKED_MEMORY=unlimited   # uncomment line 60 and make sure the value is unlimited
Set up service startup
Elasticsearch listens on port 9200. Enable mlockall on the CentOS server to disable memory swapping, set Elasticsearch to start automatically at boot, and then start the service.
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Check the external listening port:
netstat -plntu
Check that the memory lock (mlockall) is enabled and that Elasticsearch is running:
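A healthy node reports "mlockall": true in the first check below. As a sketch, here is how a script might extract that flag; the JSON is a hand-written stand-in for the real response so the sketch runs without a live node:

```shell
# A hand-written stand-in for the real curl response, so this runs without a
# live Elasticsearch node; the actual reply carries the same "mlockall" field.
response='{"nodes":{"abc123":{"process":{"mlockall":true}}}}'

# Keep only the "mlockall":<value> fragment, then take the value after ':'.
mlockall=$(printf '%s' "$response" | grep -o '"mlockall":[a-z]*' | cut -d: -f2)
echo "$mlockall"   # prints: true
```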
curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
curl -XGET 'localhost:9200/?pretty'

Step 4 - Install and configure Kibana and Nginx
Install Kibana first, then Nginx, and finally configure Nginx as a reverse proxy for Kibana.
Install and configure Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
rpm -ivh kibana-5.1.1-x86_64.rpm
Edit the Kibana configuration file.
vim /etc/kibana/kibana.yml

Find the following three lines in the configuration file and modify them:

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Set Kibana to start at boot and start the service:
sudo systemctl enable kibana
sudo systemctl start kibana
Check Kibana's listening port 5601 to make sure it started properly.
netstat -plntu
Install and configure the nginx server
The nginx package is available in the EPEL repository, so it can be installed directly with yum:

yum -y install epel-release
yum -y install nginx httpd-tools
The httpd-tools package contains tools for the web server; we will use htpasswd to add basic authentication for Kibana.
Edit the Nginx configuration file and delete the server {} block so that we can add a new virtual host configuration.
cd /etc/nginx/
vim nginx.conf   # delete the server {} block
Create a virtual host for kibana.conf:
vim /etc/nginx/conf.d/kibana.conf

server {
    listen 80;
    server_name elk-stack.co;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Use the htpasswd command to create a new basic authentication file.
sudo htpasswd -c /etc/nginx/.kibana-user admin   # enter your password when prompted
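htpasswd prompts for the password interactively. In scripts it can be handy to generate the same style of entry non-interactively with `openssl passwd -apr1` (the Apache MD5 scheme htpasswd uses by default). A sketch; the salt and password here are examples, and the output goes to a temp file rather than the real /etc/nginx/.kibana-user:

```shell
# Non-interactive alternative sketch: openssl's apr1 scheme matches htpasswd's
# default. Example salt and password; writes to a temp file, not /etc/nginx.
htfile=$(mktemp)
hash=$(openssl passwd -apr1 -salt A9x2mQpZ 'your-password-here')
printf 'admin:%s\n' "$hash" > "$htfile"
cat "$htfile"
```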
Start Nginx.
nginx -t
systemctl enable nginx
systemctl start nginx

Step 5 - Install and configure Logstash
In this step, we will install Logstash and configure it to centralize server logs from clients running Filebeat, then filter and transform the syslog data and move it into the store (Elasticsearch).
Download Logstash and install it using rpm.
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
rpm -ivh logstash-5.1.1.rpm
Generate a new SSL certificate file so that the client can identify the elastic server.
cd /etc/pki/tls   # enter the tls directory
vim openssl.cnf

Add the server identity in the [ v3_ca ] section:

[ v3_ca ]
# Server IP Address
subjectAltName = IP: 10.0.15.10
Use the openssl command to generate a certificate file.
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
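To confirm the [v3_ca] change actually reached the certificate, you can inspect its SAN section afterwards. A self-contained sketch (assumption: OpenSSL 1.1.1+, whose -addext flag replaces the openssl.cnf edit; everything runs in a temp directory so the real /etc/pki/tls paths are untouched):

```shell
# Assumes OpenSSL 1.1.1+ for -addext; runs entirely in a temp directory.
dir=$(mktemp -d)
openssl req -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -subj "/CN=elk-server" \
  -addext "subjectAltName=IP:10.0.15.10" \
  -keyout "$dir/logstash-forwarder.key" \
  -out "$dir/logstash-forwarder.crt" 2>/dev/null

# The certificate's SAN section should list the server IP.
san=$(openssl x509 -in "$dir/logstash-forwarder.crt" -noout -text | grep 'IP Address')
echo "$san"
```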
The certificate file can be found in the / etc/pki/tls/certs/ and / etc/pki/tls/private/ directories.
Next, we will create new configuration files for Logstash: a filebeat-input.conf file to configure the log input from filebeat, a syslog-filter.conf file to process syslog data, and an output-elasticsearch.conf file to define the output of log data to Elasticsearch.
Go to the logstash configuration directory and create a new configuration file in the conf.d subdirectory.
cd /etc/logstash/
vim conf.d/filebeat-input.conf

Paste the following configuration:

input {
  beats {
    port => 5443
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Create a syslog-filter.conf file
vim conf.d/syslog-filter.conf

Paste the following configuration:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

We use a filter plugin named grok to parse the syslog messages.
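To make the grok pattern concrete, here is a rough bash approximation of it applied to one hand-written sample line. The regex is a simplification of the real grok pattern, for illustration only:

```shell
# A sample syslog line, as sshd might write it (hand-written for illustration).
line='Feb  8 13:45:01 elk-client1 sshd[2042]: Failed password for root from 10.0.15.21 port 52114 ssh2'

# Rough equivalent of the grok pattern: timestamp, hostname, program,
# optional [pid], then the free-text message.
re='^([A-Z][a-z]{2} +[0-9]+ [0-9:]{8}) ([^ ]+) ([^ :[]+)(\[([0-9]+)\])?: (.*)$'
if [[ $line =~ $re ]]; then
  ts=${BASH_REMATCH[1]}        # syslog_timestamp
  host=${BASH_REMATCH[2]}      # syslog_hostname
  program=${BASH_REMATCH[3]}   # syslog_program
  pid=${BASH_REMATCH[5]}       # syslog_pid
  message=${BASH_REMATCH[6]}   # syslog_message
fi
printf 'program=%s pid=%s host=%s\n' "$program" "$pid" "$host"
# prints: program=sshd pid=2042 host=elk-client1
```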
Create an output profile output-elasticsearch.conf.
vim conf.d/output-elasticsearch.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
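The index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" setting gives one index per beat per day. A quick sketch of the naming scheme it produces, reproduced with the date command:

```shell
# The index name Logstash will write to for Filebeat data, today.
# %{+YYYY.MM.dd} in Logstash corresponds to %Y.%m.%d in `date`.
beat=filebeat
index="${beat}-$(date +%Y.%m.%d)"
echo "$index"   # e.g. filebeat-2017.01.19 on that date
```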
Start the logstash service
sudo systemctl enable logstash
sudo systemctl start logstash

Step 6 - Install and configure Filebeat on the CentOS client
A beat is a lightweight data shipper: an agent installed on client nodes that sends large amounts of data to the Logstash or Elasticsearch server. There are four beats: Filebeat for sending log files, Metricbeat for sending metrics, Packetbeat for sending network data, and Winlogbeat for sending event logs from Windows clients.
In this tutorial, I'll show you how to install and configure Filebeat to transfer log files to a Logstash server over an SSL connection.
Log in to the server of client 1. Then copy the certificate file from the elastic server to the server on client 1.
ssh root@client1IP
scp root@elk-serverIP:~/logstash-forwarder.crt .
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Next, import the elastic key on the client 1 server.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Download Filebeat and install it with the rpm command:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
rpm -ivh filebeat-5.1.1-x86_64.rpm
Filebeat is now installed; go to the configuration directory and edit the filebeat.yml file.
cd /etc/filebeat/
vim filebeat.yml

Add new log files in the paths section around line 21. We will ship two files: /var/log/secure for ssh activity, and the server log /var/log/messages:

  paths:
    - /var/log/secure
    - /var/log/messages

Add a new line around line 26 to define the files as syslog type:

  document_type: syslog

Comment out lines 83 and 85 to disable the Elasticsearch output, then switch the output to Logstash:

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

Now add the new logstash output configuration:

output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
PS: Filebeat uses Elasticsearch as its output destination by default. In this tutorial, we change it to Logstash.
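After editing, it is worth confirming that only one output section is left uncommented. A small sketch of such a check, run here against a stand-in heredoc rather than the real /etc/filebeat/filebeat.yml:

```shell
# Sketch: verify that exactly one output section is active (uncommented).
# The heredoc below stands in for the edited /etc/filebeat/filebeat.yml.
yml=$(mktemp)
cat > "$yml" <<'EOF'
#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["10.0.15.10:5443"]
EOF

# Count uncommented output.* section headers; this tutorial expects exactly one.
active=$(grep -cE '^output\.(elasticsearch|logstash):' "$yml")
echo "active outputs: $active"
```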
Set Filebeat to start at boot and start the service:
sudo systemctl enable filebeat
sudo systemctl start filebeat

Step 7 - Install and configure Filebeat on the Ubuntu client
Copy the certificate file from the server
ssh root@ubuntu-clientIP
scp root@elk-serverIP:~/logstash-forwarder.crt .
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Import the elastic key on the server.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Download the Filebeat .deb package and install it using the dpkg command.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
dpkg -i filebeat-5.1.1-amd64.deb
Go to the configuration directory and edit the filebeat.yml file.
cd /etc/filebeat/
vim filebeat.yml

Add new log files in the paths section around line 21. We will ship two files: /var/log/secure for ssh activity, and the server log /var/log/messages:

  paths:
    - /var/log/secure
    - /var/log/messages

Add a new line around line 26 to define the files as syslog type:

  document_type: syslog

Comment out lines 83 and 85 to disable the Elasticsearch output, then switch the output to Logstash:

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

Now add the new logstash output configuration:

output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
PS: Filebeat uses Elasticsearch as its output destination by default. In this tutorial, we change it to Logstash.
Set Filebeat to start at boot and start the service:
sudo systemctl enable filebeat
sudo systemctl start filebeat
Check the service status:
systemctl status filebeat

Step 8 - Test
Open your web browser and visit the elastic stack domain name you configured in Nginx; mine is "elk-stack.co". Log in with the admin username and password, and you will reach the Kibana dashboard.
Create a new default index pattern, filebeat-*, and click the Create button.
The default index has been created. If more than one beat is sending data to the elastic stack, you can change the default beat with one click on the Star button.
Go to the Discover menu and you can see all the log files from the elk-client1 and elk-client2 servers.
Example JSON output from an invalid ssh login in the elk-client1 server log.
With other options, you can do more with the Kibana dashboard.
That is the content of this article on "how to install Elastic Stack on CentOS 7". I hope the content shared here is helpful to you. If you want to learn more related knowledge, please follow the industry information channel.