
ELK big data Analysis course

Document from: Guangtong College  Version: 1.0

QQ: 430696786  WeChat: winlone

Official information:

Beats file collection: https://www.elastic.co/products/beats

Logstash log analysis: https://www.elastic.co/products/logstash

Elasticsearch log storage: https://www.elastic.co/products/elasticsearch

Kibana log display interface: https://www.elastic.co/products/kibana

1. Overview:

Beats: collectors distributed across the application servers

The Beats platform combines a variety of single-purpose data collectors that can be installed as lightweight agents to send data from hundreds or thousands of machines to Logstash or Elasticsearch.

Logstash: receives data from Beats, parses and transforms it, and sends it on to where the logs are stored

Logstash is a lightweight log collection and processing framework. It can easily collect scattered, heterogeneous logs, transform them according to your rules, and deliver them to a specified destination such as a server or a file.

Elasticsearch: stores the formatted data

Elasticsearch is a Lucene-based search and data analysis engine that provides distributed services. It is open source under the Apache license and is currently the mainstream enterprise search engine.
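
As a quick illustration of what storing and searching means here, once Elasticsearch is running (see 2.2) you can index and query a document over its REST API. The index name myindex and the document below are made up for the example; Elasticsearch 6.x requires an explicit Content-Type header:

curl -X PUT 'http://localhost:9200/myindex/doc/1' -H 'Content-Type: application/json' -d '{"message": "hello elk"}'

curl 'http://localhost:9200/myindex/_search?q=message:hello&pretty'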

Kibana: presents the statistics as charts

Kibana is an open source data analysis and visualization platform. It is a member of Elastic Stack and is designed to collaborate with Elasticsearch. You can use Kibana to search, view, and interact with the data in the Elasticsearch index. You can easily use charts, tables and maps to analyze and present the data.

2. Installation:

2.1 preparation work

* system environment: CentOS 7.4

* install prerequisite software:

yum install java-1.8.0-openjdk

* create a new directory:

mkdir /tools
mkdir /tools/install/
mkdir /tools/download/

* New es user:

groupadd es                     # add the es group
useradd es -g es -p es          # add the es user and attach it to the es group
chown -R es:es /usr/lib/jvm/    # java also needs es user access rights
chown -R es:es /tools/          # assign permissions on the installation directory
su es                           # switch to the es user

2.2 install Elasticsearch

2.2.1 Elasticsearch download:

cd /tools/download/
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.3.tar.gz

2.2.2 extract the Elasticsearch package:

tar -xzf elasticsearch-6.4.3.tar.gz
mv /tools/download/elasticsearch-6.4.3 /tools/install/elasticsearch-6.4.3
chown -R es:es /tools/install/elasticsearch-6.4.3

2.2.3 configuration of Elasticsearch

* enter the elasticsearch-6.4.3 directory

cd /tools/install/elasticsearch-6.4.3

* set jvm virtual memory:

vi config/jvm.options

-Xms512m
-Xmx512m

* elasticsearch.yml configuration:

vi config/elasticsearch.yml

If remote hosts need to access the Elasticsearch service, configure the bind IP and port.

Path settings: if left unset, data and logs default to directories under the installation path. After pointing them at custom paths, grant the es group permissions on those directories.
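
A minimal sketch of the relevant elasticsearch.yml entries (the IP and paths below are illustrative; substitute your own, and remember the chown -R es:es step for the custom directories):

network.host: 172.16.151.119
http.port: 9200
path.data: /tools/data/elasticsearch
path.logs: /tools/logs/elasticsearch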


* start elasticsearch

./bin/elasticsearch -d

* stop elasticsearch

ps -ef | grep elasticsearch
kill -9 <process id>
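
A gentler alternative, assuming you record the PID at startup: a plain kill (SIGTERM, without -9) lets Elasticsearch shut down cleanly.

./bin/elasticsearch -d -p es.pid
kill $(cat es.pid)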

* several errors that may occur when starting the service

1. Max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

vi /etc/security/limits.conf

Add:

es soft nofile 65536
es hard nofile 65536

Verify with: ulimit -Hn

2. Max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
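
Fix (run as root; this is the standard remedy for this error):

sysctl -w vm.max_map_count=262144

To make it permanent, add vm.max_map_count=262144 to /etc/sysctl.conf and run sysctl -p.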

3. Max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

vi /etc/security/limits.d/90-nproc.conf
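
Raise the nproc limit in that file, for example (4096 is an illustrative value; anything at or above the 2048 demanded by the error message works):

es soft nproc 4096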

4. System call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Add to config/elasticsearch.yml:

bootstrap.system_call_filter: false

2.2.4 access address for successful startup

curl 'http://localhost:9200/?pretty'
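
If the node is up, this returns JSON along these lines (values here are illustrative and abbreviated):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "6.4.3" },
  "tagline" : "You Know, for Search"
}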

2.3 install Kibana

2.3.1 Kibana download:

cd /tools/download/
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.3-linux-x86_64.tar.gz

2.3.2 extract the Kibana package:

tar -xzf kibana-6.4.3-linux-x86_64.tar.gz
mv /tools/download/kibana-6.4.3-linux-x86_64 /tools/install/kibana-6.4.3-linux-x86_64
chown -R es:es /tools/install/kibana-6.4.3-linux-x86_64

2.3.3 Kibana configuration:

* configure the Kibana server.host address. Remote connections are required, so set it to the server's private network IP.

vi config/kibana.yml

The configuration is as follows:

server.port: 5601
server.host: "172.16.151.119"
elasticsearch.url: "http://localhost:9200"


* start kibana

nohup ./bin/kibana &
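
To check from the shell that Kibana is up, you can query its status endpoint (available in 6.x):

curl -s http://172.16.151.119:5601/api/status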

* visit kibana

Intranet: http://172.16.151.119:5601
Public network: http://<public IP>:5601

2.4 install Filebeat

2.4.1 Filebeat download:

cd /tools/download/
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.3-linux-x86_64.tar.gz

2.4.2 extract the Filebeat package:

tar -xzf filebeat-6.4.3-linux-x86_64.tar.gz
mv /tools/download/filebeat-6.4.3-linux-x86_64 /tools/install/filebeat-6.4.3-linux-x86_64
chown -R es:es /tools/install/filebeat-6.4.3-linux-x86_64

2.4.3 configure the filebeat.yml file

#=========================== Filebeat inputs =============================

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/logs/*.log
  fields_under_root: true
  fields:
    alilogtype: applog

#=========================== Filebeat modules ============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting =====================

setup.template.settings:
  index.number_of_shards: 3

#--------------------------- Logstash output -----------------------------

output.logstash:
  hosts: ["localhost:5044"]


2.4.4 start filebeat

nohup ./filebeat -c ./filebeat.yml &
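
Before backgrounding it, Filebeat 6.x can validate the setup with its test subcommands:

./filebeat test config -c ./filebeat.yml
./filebeat test output -c ./filebeat.yml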

2.5 install logstash

2.5.1 logstash download:

cd /tools/download/
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.3.tar.gz

2.5.2 extract the logstash package:

tar -xzf logstash-6.4.3.tar.gz
mv /tools/download/logstash-6.4.3 /tools/install/logstash-6.4.3
chown -R es:es /tools/install/logstash-6.4.3

2.5.3 configure the logstash.conf file

vi config/logstash.conf

The configuration is as follows:

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    patterns_dir => ["/tools/install/logstash-6.4.3/patterns"]
    match => { "message" => "%{IPORHOST:forwordip}.*%{HTTPDATE:logtime}.*" }
    remove_field => ["message", "host", "tags", "input", "@timestamp", "offset", "@version", "beat"]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "beat-mylog"
    # user => "elastic"
    # password => "changeme"
  }
  stdout { codec => rubydebug }
}


2.5.4 set the jvm.options virtual memory

-Xms1g
-Xmx1g

2.5.5 configure the logstash.yml file

vi config/logstash.yml

node.name: test
pipeline.id: main
pipeline.workers: 2
pipeline.batch.size: 125
pipeline.batch.delay: 50
pipeline.unsafe_shutdown: false
path.config: /tools/install/logstash-6.4.3/config/logstash.conf
http.host: "127.0.0.1"
http.port: 9600-9700


The main logstash.yml settings, their descriptions, and default values:

node.name: Descriptive name of the node. Default: hostname of the machine.

path.data: Directory Logstash and its plugins use for any persistent needs. Default: LOGSTASH_HOME/data.

pipeline.id: ID of the pipeline. Default: main.

pipeline.workers: Number of workers that run the filter and output stages of the pipeline in parallel. If events are backing up or the CPU is not saturated, consider raising this number to make better use of the machine's processing power. Default: number of host CPU cores.

pipeline.batch.size: Maximum number of events a single worker thread collects from the inputs before attempting to run the filters and outputs. Larger batch sizes are usually more efficient at the cost of higher memory overhead; you may need to increase the JVM heap in the jvm.options configuration file. See the Logstash configuration files for more information. Default: 125.

pipeline.batch.delay: When creating pipeline event batches, how many milliseconds to wait for each event before dispatching an undersized batch to the pipeline workers. Default: 50.

pipeline.unsafe_shutdown: When true, forces Logstash to exit during shutdown even if there are still in-flight events in memory; by default Logstash refuses to exit until all received events have been pushed to the outputs. Enabling this option can lose data during shutdown. Default: false.

path.config: Path to the Logstash configuration of the main pipeline. If you specify a directory or a wildcard, the configuration files are read in alphabetical order. Default: platform-specific, see the Logstash directory layout.

config.string: A string containing the pipeline configuration to use for the main pipeline, with the same syntax as a configuration file. Default: none.

config.test_and_exit: When true, checks that the configuration is valid and then exits. Note that grok patterns are not checked for correctness by this setting. Logstash can read multiple configuration files from a directory; combined with log.level: debug, Logstash logs the merged configuration, annotating each block with its source file. Default: false.

config.reload.automatic: When true, periodically checks whether the configuration has changed and reloads it when it has; this can also be triggered manually with a SIGHUP signal. Default: false.

config.reload.interval: How often Logstash checks the configuration files for changes. Default: 3s.

config.debug: When true, shows the fully compiled configuration as debug log messages; you must also set log.level: debug. Warning: the log messages include any password options passed to plugin configurations, so plaintext passwords may appear in the logs! Default: false.

config.support_escapes: When true, quoted strings process these escape sequences: \n becomes a literal newline (ASCII 10), \r a carriage return (ASCII 13), \t a tab (ASCII 9), \\ a backslash, \" a double quote, and \' a single quote. Default: false.

modules: When configured, modules must be in the nested YAML structure described in the documentation. Default: none.

queue.type: Internal queueing model for event buffering: memory for the legacy in-memory queue, or persisted for the disk-based ACKed queue (persistent queue). Default: memory.

path.queue: Directory where the data files are stored when persistent queues are enabled (queue.type: persisted). Default: path.data/queue.

queue.page_capacity: Size of the page data files used when persistent queues are enabled (queue.type: persisted); the queue consists of append-only data files split into pages. Default: 64mb.

queue.max_events: Maximum number of unread events in the queue when persistent queues are enabled (queue.type: persisted). Default: 0 (unlimited).

queue.max_bytes: Total capacity of the queue in bytes; make sure the disk drive has more capacity than the value specified here. If both queue.max_events and queue.max_bytes are set, Logstash uses whichever limit is reached first. Default: 1024mb (1g).

queue.checkpoint.acks: Maximum number of ACKed events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted); queue.checkpoint.acks: 0 makes this unlimited. Default: 1024.

queue.checkpoint.writes: Maximum number of written events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted); queue.checkpoint.writes: 0 makes this unlimited. Default: 1024.

queue.drain: When enabled, Logstash waits until the persistent queue is drained before shutting down. Default: false.

dead_letter_queue.enable: Flag telling Logstash to use the DLQ feature supported by plugins. Default: false.

dead_letter_queue.max_bytes: Maximum size of each dead letter queue; entries that would grow the queue beyond this setting are dropped. Default: 1024mb.

path.dead_letter_queue: Directory where the dead-letter-queue data files are stored. Default: path.data/dead_letter_queue.

http.host: Bind address of the metrics REST endpoint. Default: "127.0.0.1".

http.port: Bind port of the metrics REST endpoint. Default: 9600.

log.level: Log level; valid options are fatal, error, warn, info, debug, trace. Default: info.

log.format: Log format: json to log in JSON, or plain to use Object#.inspect. Default: plain.

path.logs: Directory to which Logstash writes its logs. Default: LOGSTASH_HOME/logs.

path.plugins: Where to find custom plugins; this setting can be specified multiple times to include multiple paths. Plugins must live in the directory hierarchy PATH/logstash/TYPE/NAME.rb, where TYPE is inputs, filters, outputs, or codecs, and NAME is the plugin name. Default: platform-specific, see the Logstash directory layout.

2.5.6 logstash start

nohup ./bin/logstash -f ./config/logstash.conf &

Parameters, their descriptions, and examples:

-e  Execute immediately, starting the instance with the pipeline configuration given on the command line. Example: ./bin/logstash -e "input { stdin {} } output { stdout {} }"

-f  Start the instance with the specified configuration file. Example: ./bin/logstash -f config/test.conf

-t  Test the configuration file for correctness. Example: ./bin/logstash -f config/test.conf -t

-l  Specify the log file name. Example: ./bin/logstash -f config/test.conf -l logs/test.log

-w  Specify the number of filter workers (default 5). Example: ./bin/logstash -f config/test.conf -w 8

2.5.7 related address

Input listening: 0.0.0.0:5044
Logstash API endpoint: http://127.0.0.1:9600

Visit: http://127.0.0.1:9600

{"host": "iZbp1f69c0ocoflj7y2nlxZ", "version": "6.4.3", "http_address": "127.0.0.1 http_address", "id": "de36d17d-ca9d-4123-a33b-c2b9af00dcd9", "name": "test", "build_date": "2018-10-31T00:19:35Z", "build_sha": "17e7a50dfb0beb05f5338ee5a0e8338e68eb130b", "build_snapshot": false}

3. Format the log:

3.1 configuration of nginx log parameters:

Nginx log output parameter configuration:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent $request_body "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" $request_time';

Nginx log content:

* POST:

219.135.135.2 - - [29/Nov/2018:18:41:13 +0800] "POST /portal/users/cartList HTTP/1.1" 302 5 ?appName=luckwine-portal-web&channelCode=01&traceId=1712884f4c9e4d8ad9bdb843824dd197& "http://47.106.219.117:8010/home?a=1" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0" "-" 0.008

* GET:

219.135.135.2 - - [29/Nov/2018:18:16:32 +0800] "GET /portal/common/needLogin?dd=23 HTTP/1.1" 200 96 - "http://47.106.219.117:8010/home?a=1" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0" "-" 0.002

Nginx log parameters:

$args                 # parameters in the request line
$query_string         # same as $args
$arg_NAME             # value of the NAME parameter in a GET request
$is_args              # "?" if the request has parameters, otherwise an empty string
$uri                  # current URI of the request, without the request parameters (those are in $args); it can differ from the $request_uri sent by the browser, since it can be changed by internal redirects or the index directive; it does not contain the hostname, e.g. "/foo/bar.html"
$document_uri         # same as $uri
$document_root        # root or alias of the current request
$host                 # priority: hostname of the HTTP request line > "Host" request header field > server name matching the request
$hostname             # hostname
$https                # "on" if SSL security mode is enabled, otherwise an empty string
$binary_remote_addr   # client address in binary form, with a fixed length of 4 bytes
$body_bytes_sent      # number of bytes sent to the client, excluding the response header; compatible with the "%B" parameter of Apache's mod_log_config module
$bytes_sent           # number of bytes sent to the client
$connection           # TCP connection serial number
$connection_requests  # current number of requests on the TCP connection
$content_length       # "Content-Length" request header field
$content_type         # "Content-Type" request header field
$cookie_name          # cookie name
$limit_rate           # used to set the rate limit of the response
$msec                 # current Unix timestamp
$nginx_version        # nginx version
$pid                  # PID of the worker process
$pipe                 # "p" if the request came via pipelining, otherwise "."
$proxy_protocol_addr  # client address when the server is accessed through a proxy; for direct access, an empty string
$realpath_root        # real path of the root or alias of the current request, with all symbolic links resolved
$remote_addr          # client address
$remote_port          # client port
$remote_user          # user name supplied for HTTP basic authentication
$request              # full client request line
$request_body         # client request body; this variable can be used in location blocks to pass the request body to the next proxy level via proxy_pass, fastcgi_pass, uwsgi_pass, or scgi_pass
$request_body_file    # temporary file holding the client request body; delete it after processing. To enable this feature, set client_body_in_file_only; when passing the file to a backend proxy server, disable the request body itself, i.e. set proxy_pass_request_body off, fastcgi_pass_request_body off, uwsgi_pass_request_body off, or scgi_pass_request_body off
$request_completion   # "OK" if the request completed; empty if it did not complete or was not the last part of a range request
$request_filename     # file path of the current request, generated from the root or alias directive and the request URI
$request_length       # length of the request (request line, headers, and body)
$request_method       # HTTP request method, usually "GET" or "POST"
$request_time         # time spent processing the client request, in seconds with millisecond precision, from reading the first byte of the client until the last character is sent and the log is written
$request_uri          # original URI, including the client request parameters; it cannot be modified (see $uri for a URI that can be changed or rewritten) and does not include the hostname, e.g. "/cnphp/test.php?arg=freemouse"
$scheme               # request protocol, "http" or "https"
$server_addr          # server address; note that to avoid a system call, the IP should be set explicitly in the configuration file
$server_name          # server name
$server_port          # server port
$server_protocol      # HTTP version of the request, usually "HTTP/1.0" or "HTTP/1.1"
$status               # HTTP response code
$time_iso8601         # server time in ISO 8601 format
$time_local           # server time (log format)
$cookie_NAME          # cookie variable from the client request header: "$cookie_" plus the cookie name; its value is the value of that cookie
$http_NAME            # matches any request header field; the NAME part of the variable name can be replaced with any request header field, e.g. $http_accept_language for the "Accept-Language" header, or $http_cookie
$http_host            # request address, i.e. the address (IP or domain name) entered in the browser
$http_referer         # URL the request came from, i.e. which page linked here
$http_user_agent      # client browser and related information
$http_x_forwarded_for # client address recorded by a web node sitting behind a proxy server; this parameter only takes effect if the proxy server also sets the relevant X-Forwarded-For header
$sent_http_NAME       # sets any HTTP response header field; the NAME part of the variable name can be replaced with any response header field, e.g. $sent_http_content_length to set the Content-Length response header

3.2 Logstash capture fields:

3.2.1 input components

input {
  beats {
    port => 5044
  }
}

3.2.2 output components

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "beat-mylog"
    # user => "elastic"
    # password => "changeme"
  }
  stdout { codec => rubydebug }
}

3.2.3 filter components

Official document: https://www.elastic.co/guide/en/logstash/6.4/filter-plugins.html

Overall filter structure:

filter {
  # regular-expression field capture
  grok {}
  # field type conversion
  mutate {}
  # assign time data to another field
  date {}
  # delete events
  drop {}
}

3.2.3.1. Grok matching log field:

A. Create a new logstash.conf configuration for parsing nginx logs

filter {
  grok {
    patterns_dir => ["/tools/install/logstash-6.4.3/patterns"]
    match => { "message" => "%{nginx_log}" }
    remove_field => ["message"]
  }
  date {
    match => ["logtime", "dd/MMM/yyyy:HH:mm:ss Z"]
    target => "@timestamp"
  }
}

B. Regular expression for capturing the fields

* the rules are as follows:

nginx_log %{IPORHOST:clientip} - %{USER:user} \[%{HTTPDATE:logtime}\] "(?:%{WORD:method} %{URIPATH:requestURI}(%{URIPARAM:parms}|)(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:status} %{NUMBER:bytes} (%{URIPARAM:parms}|-) %{QS:referrer} %{QS:agent} "(%{IPORHOST:forwordip}|-)" %{NUMBER:reqtime}
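
For patterns_dir to pick this up, the nginx_log line above must be saved in a file under that directory; the file name is arbitrary (nginx is used here as an example):

mkdir -p /tools/install/logstash-6.4.3/patterns
vi /tools/install/logstash-6.4.3/patterns/nginx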

Grok built-in pattern reference:

https://github.com/elastic/logstash/blob/v1.4.2/patterns/grok-patterns

You can test whether the pattern captures correctly in Kibana (the Grok Debugger under Dev Tools):

3.2.3.2. Mutate field conversion

Mutate converter documentation:

https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-convert

https://blog.csdn.net/cromma/article/details/52919742

Mutate purpose: applies conversion operations to the output fields

Convert:

Converts the value of a field to another type, such as converting a string to an integer. If the field value is an array, all members are converted. If the field is a hash, no action is taken

mutate {
  convert => {
    "bytes" => "integer"
    "reqtime" => "float"
  }
}

Lowercase & uppercase:

Case conversion. Array type, with the field names as the elements.

mutate { lowercase => ["fieldname"] }

Join: joins an array into a string

Use fixed join symbols to join elements in an array and do nothing if the given field is not an array type.

mutate { split => ["message", "|"] }
mutate { join => ["message", ","] }

# before join: "message" => ["1", "2", "3", "4"] (produced by the split above)
# after join: "message" => "1,2,3,4"

Merge: concatenating strings

String + string

Array + string

Operation logic: concatenate the characters of the appended field to the characters of the target field

mutate { merge => { "target_field" => "appended_field" } }

Split: split characters into arrays

Use delimiters to split fields into arrays. Valid only for string fields.

Mutate {split = > {"fieldname" = > ","}}

# before split processing "message" = > "1 | 2 | 3 | 4"

# after split processing, "message" = > [[0] "1", [1] "2", [2] "3", [3] "4"]

Gsub: replacement string

Array type, no default setting.

This parameter setting is only for the string type, and if it is not of the string type, nothing is done.

mutate {
  gsub => [
    # strip the double quotes captured by %{QS:...}:
    "agent", "\"", "",
    "referrer", "\"", ""
  ]
}

Strip:

Remove white space from the field. Note: this applies only to spaces at the beginning and end.

mutate { strip => ["field1", "field2"] }

3.2.3.3. IP library (GeoIP):

geoip {
  source => "clientip"
  target => "geoip"
}

Obtained IP data:

lat: latitude, lon: longitude

"geoip" = > {"country_code3" = > "CN", "latitude" = > 39.9289, "region_name" = > "Beijing", "location" = > {"lat" = > 39.9289, "lon" = > 116.3883}

3.2.3.4. Drop filter (deleting events):

Description: URLs for images, css styles, js, and other static files should not enter the log statistics

if [requestURI] =~ "jpg|png|gif|css|js|ico" {
  drop {}
}

3.2.3.5. Restart with the new configuration file:

nohup ./bin/logstash -f ./config/logstash.conf &

3.3 kibana interface configuration:

Interface configuration:

https://www.elastic.co/guide/cn/kibana/current/tutorial-visualizing.html

1. Website PV statistics:

Set up the search criteria:

Configure the index pattern beat*:

2. Byte totals by time period

Configure a pie chart

Statistical methods: counts, aggregated totals, ranking statistics, and so on.

Fields to aggregate: you can aggregate multiple fields, for example constraining "bytes" within "time" buckets.

3. Page response-time statistics

Configure a bar chart

Y-axis: aggregate the request time:

X-axis: bucket by address:

Then split the buckets by time:

3.4 kibana Statistics Summary Panel:


Common problems:

1. Error content: "No cached mapping for this field, refresh your mapping from the Settings > Indices page"

Solution:

Cause: Elasticsearch changed the field type, so the field type Kibana cached in its mapping no longer matches the index. Refresh the field list from the Settings > Indices page, as the error message suggests.

2. Cluster health shows a yellow warning:

PUT aptech-test
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 0
  }
}

This is caused by number_of_replicas being set to one or more replicas; this is currently a single machine, so it can only hold 0 replicas.
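
You can confirm the cause with the standard cluster-health API: a yellow status means replica shards are unassigned, which is expected on a single node unless number_of_replicas is 0.

curl 'http://localhost:9200/_cluster/health?pretty'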

3.5 deploy a static website (for testing):

3.5.1 install nginx

yum install nginx

3.5.2 copy a static website

* put the test.zip static website on /tools/apps

* copy via the jump host: scp -P 22 ./cpts_1406_cdz.zip root@IP:/tools/apps

* decompress test.zip with unzip

3.5.3 nginx configuration

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent $request_body "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" $request_time';

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        access_log /data/logs/nginx.access.log main;

        location / {
            root  /tools/apps/;
            index index.html index.htm;
        }

        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

3.5.4 start nginx

Start: nginx -c /etc/nginx/nginx.conf
Stop: nginx -s stop

Public network access: http://<public IP>:80

= other related =

Start elasticsearch: ./bin/elasticsearch -d

Intranet: http://<LAN IP>:9200/?pretty

Public network: none

Start kibana: nohup ./bin/kibana &

Intranet: http://<LAN IP>:5601

Public network: http://<public IP>:5601

Start logstash: nohup ./bin/logstash -f ./config/logstash.conf &

Intranet: http://<LAN IP>:9600

Public network: none

Logstash receives data

TCP listening port: 5044

Verification: telnet <private network IP> 5044

Start filebeat: nohup ./filebeat -c ./filebeat.yml &

Private network: none

Public network: none
