This article covers how to build an ELK stack (Elasticsearch, Kibana, Logstash) with docker-compose. Many people run into trouble with this setup in practice, so the steps below walk through the installation and the common pitfalls; read carefully and follow along.
First, prepare two CentOS 7 virtual machines.
Create /home/docker-contains/es, and under it create a node folder.
Under node, create data, logs, and conf directories.
In the conf directory, create an elasticsearch.yml file with the following content:
cluster.name: elasticsearch-cluster
node.name: es01
network.bind_host: 0.0.0.0
network.publish_host: 192.168.65.135
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.65.135:9300", "192.168.65.136:9300"]
discovery.zen.minimum_master_nodes: 1
path.logs: /usr/share/elasticsearch/logs
xpack.security.audit.enabled: true
Go back to the es directory and create a docker-compose.yml with the following content:
version: '3'
services:
  es01:
    image: elasticsearch:6.6.1
    container_name: es01
    restart: always
    volumes:
      - /home/docker_container/es/master/data:/usr/share/elasticsearch/data:rw
      - /home/docker_container/es/master/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/docker_container/es/master/logs:/usr/share/elasticsearch/logs:rw
      - /home/docker_container/es/master/plugin1:/usr/share/elasticsearch/plugins:rw
    ports:
      - "9200:9200"
      - "9300:9300"
Repeat the same process on the other machine:
Create a new elasticsearch.yml:
cluster.name: elasticsearch-cluster
node.name: es02
network.bind_host: 0.0.0.0
network.publish_host: 192.168.65.136
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.65.135:9300", "192.168.65.136:9300"]
discovery.zen.minimum_master_nodes: 1
path.logs: /usr/share/elasticsearch/logs
xpack.security.audit.enabled: true
Create a new docker-compose.yml:
version: '3'
services:
  es02:
    image: elasticsearch:6.6.1
    container_name: es02
    restart: always
    volumes:
      - /home/docker-container/es/node1/data:/usr/share/elasticsearch/data:rw
      - /home/docker-container/es/node1/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/docker-container/es/node1/logs:/usr/share/elasticsearch/logs:rw
      - /home/docker-container/es/node1/plugin1:/usr/share/elasticsearch/plugins:rw
    ports:
      - "9200:9200"
      - "9300:9300"
In the directory containing docker-compose.yml, run:
docker-compose up -d
Install Kibana.
docker-compose.yml:
version: '3'
services:
  kibana:
    image: kibana:6.6.1
    container_name: kibana
    restart: always
    volumes:
      - /home/docker_container/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601
kibana.yml:
elasticsearch.hosts: ["http://192.168.65.135:9200"]
server.host: "0.0.0.0"
xpack.monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: zh-CN
If "Kibana server is not ready yet" appears:
First point: the Kibana and ES versions do not match (the cause most often cited online).
Solution: align Kibana and ES on the same version.
Second point: there is a problem with the configuration in kibana.yml (the log shows Error: No Living connections).
Solution: change elasticsearch.url in kibana.yml from the default http://elasticsearch:9200 to the correct address, i.e. http://<your own IP>:9200.
Third point: the browser simply has not caught up yet.
Solution: refresh the browser a few times (it took six refreshes here).
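Once Kibana loads, a quick sanity check from the Dev Tools console (an extra verification step, not part of the original walkthrough) confirms that both ES nodes joined the cluster:

GET /_cluster/health
GET /_cat/nodes?v

A healthy two-node cluster reports "number_of_nodes": 2, and _cat/nodes lists es01 and es02.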
Logstash:
Logstash is installed without Docker. Upload logstash-6.4.3.tar.gz to the /home/software directory.
Extract the archive and enter the extracted logstash-6.4.3 directory.
Edit the pipeline file with vim config/pipelines.yml (if you need more than one pipeline, just append additional pipeline.id and path.config entries; pipeline.id values must not repeat):
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
- pipeline.id: table1
  path.config: "/home/software/logstash-6.4.3/conf/mysql_1.conf"
- pipeline.id: table2
  path.config: "/home/software/logstash-6.4.3/conf/mysql.conf"
mysql.conf:
input {
  jdbc {
    jdbc_driver_library => "/home/software/logstash-6.4.3/sql/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://ip:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    schedule => "* * * * *"
    statement => "SELECT * FROM user WHERE update_time >= :sql_last_value"
    use_column_value => true
    tracking_column_type => "timestamp"
    tracking_column => "update_time"
    last_run_metadata_path => "syncpoint_table"
  }
}
output {
  elasticsearch {
    # ES addresses and ports
    hosts => ["192.168.65.135:9200", "192.168.65.136:9200"]
    # index name, can be customized
    index => "user"
    # use the id field from the database as the document id in the corresponding type
    document_id => "%{id}"
    document_type => "user"
  }
  stdout {
    # output in JSON format
    codec => json_lines
  }
}
mysql_1.conf:
input {
  jdbc {
    jdbc_driver_library => "/home/software/logstash-6.4.3/sql/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://ip:3306/lvz_goods?autoReconnect=true&useUnicode=true&createDatabaseIfNotExist=true&characterEncoding=utf8&useSSL=false&serverTimezone=GMT%2B8"
    jdbc_user => "root"
    jdbc_password => ""
    schedule => "* * * * *"
    statement => "SELECT * FROM lvz_product WHERE update_time >= :sql_last_value"
    use_column_value => true
    tracking_column_type => "timestamp"
    tracking_column => "update_time"
    last_run_metadata_path => "syncpoint_table"
  }
}
output {
  elasticsearch {
    # ES addresses and ports
    hosts => ["192.168.65.135:9200", "192.168.65.136:9200"]
    # index name, can be customized
    index => "goods"
    # use the id field from the database as the document id in the corresponding type
    document_id => "%{id}"
    document_type => "goods"
  }
  stdout {
    # output in JSON format
    codec => json_lines
  }
}
Note: the MySQL connector JAR must be placed in the directory referenced above. With that, the installation of the three main components is complete.
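As an extra sanity check (not part of the original steps), once Logstash has run you can confirm from the Kibana Dev Tools console that the JDBC pipelines are writing to ES; the index names user and goods come from the configs above:

GET /_cat/indices?v

GET /goods/_search
{
  "query": { "match_all": {} },
  "size": 1
}

Both indices should be listed, and the search should return documents synced from MySQL.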
ES integration with the ik and pinyin analyzers:
1. Download elasticsearch-analysis-ik-6.6.1.zip, upload it to the node1 and node2 directories on the two machines, then unzip it and rename the folder to ik.
2. Download elasticsearch-analysis-pinyin-6.6.1.zip, upload it to the node1 and node2 directories on the two machines, then unzip it and rename the folder to pinyin.
Make sure the docker-compose file mounts the plugin directory:
- /home/docker-container/es/node1/plugin1:/usr/share/elasticsearch/plugins:rw
Restart ES and open the Kibana Dev Tools console to execute the following.
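Before touching the index, it is worth confirming that both plugins were picked up by each node (a quick check added here, not in the original article):

GET /_cat/plugins?v

Each node should list analysis-ik and analysis-pinyin.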
Delete the goods index first, then recreate it with the custom ik + pinyin analyzer ik_smart_pinyin:
DELETE /goods

PUT /goods
{
  "settings": {
    "analysis": {
      "analyzer": {
        "ik_smart_pinyin": {
          "type": "custom",
          "tokenizer": "ik_smart",
          "filter": ["my_pinyin", "word_delimiter"]
        },
        "ik_max_word_pinyin": {
          "type": "custom",
          "tokenizer": "ik_max_word",
          "filter": ["my_pinyin", "word_delimiter"]
        }
      },
      "filter": {
        "my_pinyin": {
          "type": "pinyin",
          "keep_separate_first_letter": true,
          "keep_full_pinyin": true,
          "keep_original": true,
          "limit_first_letter_length": 16,
          "lowercase": true,
          "remove_duplicated_term": true
        }
      }
    }
  }
}
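To verify the analyzer chain, you can test it directly against the new index; the sample text below is only an illustration:

GET /goods/_analyze
{
  "analyzer": "ik_smart_pinyin",
  "text": "苹果手机"
}

Because keep_original, keep_full_pinyin and keep_separate_first_letter are enabled, the response should contain the Chinese terms together with their full-pinyin and first-letter forms.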
Specify ik_smart_pinyin as the analyzer in the goods mapping:
POST /goods/_mapping/goods
{
  "goods": {
    "properties": {
      "@timestamp": { "type": "date" },
      "@version": {
        "type": "text",
        "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
      },
      "attribute_list": {
        "type": "text",
        "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
      },
      "category_id": { "type": "long" },
      "created_time": { "type": "date" },
      "detail": {
        "type": "text",
        "analyzer": "ik_smart_pinyin",
        "search_analyzer": "ik_smart_pinyin"
      },
      "id": { "type": "long" },
      "main_image": {
        "type": "text",
        "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
      },
      "name": {
        "type": "text",
        "analyzer": "ik_smart_pinyin",
        "search_analyzer": "ik_smart_pinyin"
      },
      "revision": { "type": "long" },
      "status": { "type": "long" },
      "sub_images": {
        "type": "text",
        "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
      },
      "subtitle": {
        "type": "text",
        "analyzer": "ik_smart",
        "search_analyzer": "ik_smart"
      },
      "updated_time": { "type": "date" }
    }
  }
}
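With this mapping applied, the name and detail fields can be searched by pinyin as well as by Chinese. A minimal example query (the term pingguo and the assumption that a matching product exists are illustrative only):

GET /goods/_search
{
  "query": {
    "match": { "name": "pingguo" }
  }
}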
Install Kafka (docker-compose.yml):
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"                 # kafka maps container port 9092 to a random host port
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.65.135   # host IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: test:1:1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Start a two-node Kafka cluster:
docker-compose up -d --scale kafka=2
Integrate Kafka and ELK into the local project.
application.yml:
# Service startup port
server:
  port: 8500
# Service name (the name registered with Eureka)
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8100/eureka
spring:
  application:
    name: app-lvz-goods
  redis:
    host: 192.168.65.136
    port: 6379
    password: feilvzhang
    pool:
      max-idle: 100
      min-idle: 1
      max-active: 1000
      max-wait: -1
  # Database connection
  datasource:
    username: root
    password:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://192.168.125.113:3306/lvz_goods?autoReconnect=true&useUnicode=true&createDatabaseIfNotExist=true&characterEncoding=utf8&useSSL=false&serverTimezone=GMT%2B8
  data:
    elasticsearch:
      # Cluster name
      cluster-name: elasticsearch-cluster
      # Cluster addresses
      cluster-nodes: 192.168.65.135:9300,192.168.65.136:9300
  kafka:
    bootstrap-servers: 192.168.65.135:32768,192.168.65.135:32769
Logging aspect:
package com.lvz.shop.elk.aop;

import com.alibaba.fastjson.JSONObject;
import com.lvz.shop.elk.kafka.KafkaSender;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

import javax.servlet.http.HttpServletRequest;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;

/**
 * @description: ELK intercepts log information
 * @author: flz
 * @date: 2019-8-9 15:57
 */
@Aspect
@Component
public class AopLogAspect {

    @Autowired
    private KafkaSender kafkaSender;

    // Declare a pointcut with an execution expression
    @Pointcut("execution(* com.lvz.shop.*.impl.*.*(..))")
    private void serviceAspect() {
    }

    // Log the request content before the method runs
    @Before(value = "serviceAspect()")
    public void methodBefore(JoinPoint joinPoint) {
        ServletRequestAttributes requestAttributes =
                (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        HttpServletRequest request = requestAttributes.getRequest();

        // print request content
        // log.info("=== request content ===");
        // log.info("request address: " + request.getRequestURL().toString());
        // log.info("request method: " + request.getMethod());
        // log.info("request class method: " + joinPoint.getSignature());
        // log.info("request class method parameters: " + Arrays.toString(joinPoint.getArgs()));
        // log.info("=== request content ===");

        JSONObject jsonObject = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // set the date format
        // request time
        jsonObject.put("request_time", df.format(new Date()));
        // request URL
        jsonObject.put("request_url", request.getRequestURL().toString());
        // request method
        jsonObject.put("request_method", request.getMethod());
        // request class and method
        jsonObject.put("signature", joinPoint.getSignature());
        // request parameters
        jsonObject.put("request_args", Arrays.toString(joinPoint.getArgs()));

        JSONObject requestJsonObject = new JSONObject();
        requestJsonObject.put("request", jsonObject);
        kafkaSender.send(requestJsonObject);
    }

    // Log the return content after the method finishes
    @AfterReturning(returning = "o", pointcut = "serviceAspect()")
    public void methodAfterReturing(Object o) {
        // log.info("--- return content ---");
        // log.info("Response content: " + gson.toJson(o));
        // log.info("--- return content ---");

        JSONObject respJSONObject = new JSONObject();
        JSONObject jsonObject = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // set the date format
        jsonObject.put("response_time", df.format(new Date()));
        jsonObject.put("response_content", JSONObject.toJSONString(o));
        respJSONObject.put("response", jsonObject);
        kafkaSender.send(respJSONObject);
    }
}
Kafka message delivery:
package com.lvz.shop.elk.kafka;

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

/**
 * @description: producer
 * @author: flz
 * @date: 2019-8-9 15:59
 */
@Component
@Slf4j
public class KafkaSender<T> {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    /**
     * Send a message to Kafka
     *
     * @param obj message object
     */
    public void send(T obj) {
        String jsonObj = JSON.toJSONString(obj);
        log.info("- message = {}", jsonObj);
        // send the message
        ListenableFuture<SendResult<String, Object>> future =
                kafkaTemplate.send("goods_mylog", jsonObj);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
            @Override
            public void onFailure(Throwable throwable) {
                log.info("Produce: The message failed to be sent: " + throwable.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, Object> stringObjectSendResult) {
                // TODO business processing
                log.info("Produce: The message was sent successfully:");
                log.info("Produce: _ + result: " + stringObjectSendResult.toString());
            }
        });
    }
}
Global exception logging:
package com.lvz.shop.elk.aop.error;

import com.alibaba.fastjson.JSONObject;
import com.lvz.shop.elk.kafka.KafkaSender;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;

import java.text.SimpleDateFormat;
import java.util.Date;

/**
 * @description: global exception handling
 * @author: flz
 * @date: 2019-8-9 15:56
 */
@ControllerAdvice
@Slf4j
public class GlobalExceptionHandler {

    @Autowired
    private KafkaSender kafkaSender;

    @ExceptionHandler(RuntimeException.class)
    @ResponseBody
    public JSONObject exceptionHandler(Exception e) {
        log.info("# Global catch exception #, error: {}", e);

        // 1. Encapsulate the exception log information
        JSONObject errorJson = new JSONObject();
        JSONObject logJson = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // set the date format
        logJson.put("request_time", df.format(new Date()));
        logJson.put("error_info", e);
        errorJson.put("request_error", logJson);
        kafkaSender.send(errorJson);

        // 2. Return an error message
        JSONObject result = new JSONObject();
        result.put("code", 500);
        result.put("msg", "system error");
        return result;
    }
}
Configure Logstash to consume from Kafka and write to ES:
goods_mylog.conf:
input {
  kafka {
    bootstrap_servers => ["192.168.65.135:32768"]
    topics => ["goods_mylog"]
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.65.135:9200"]
    index => "goods_mylog"
  }
}
mylog.conf:
input {
  kafka {
    bootstrap_servers => ["192.168.65.135:32768"]
    topics => ["my_log"]
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.65.135:9200"]
    index => "my_log"
  }
}
Start the project; as requests flow through the service, logs are pushed to Kafka, picked up by Logstash, and can be viewed in Kibana.
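A quick end-to-end check (again just a sanity query, assuming the goods_mylog index created by the pipeline above) is to search the log index from the Kibana console:

GET /goods_mylog/_search
{
  "query": { "match_all": {} },
  "size": 5
}

Each hit should contain the request/response JSON produced by the AOP aspect.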
That concludes "how to build es/kibana/logstash (ELK) with docker-compose". Thanks for reading; follow the site for more practical articles on this topic.