2025-02-23 Update, from SLTechnology News&Howtos (Shulou), Servers section
Shulou (Shulou.com) 06/02 report
I. Detailed explanation of the configuration
Scenario description: MySQL data tables are synchronized to the ElasticSearch search engine, both in full and incrementally.
1. Downloads
elasticsearch version 6.3.2
logstash version 6.3.2
mysql-connector-java-5.1.13.jar
2. Core configuration
Path: /usr/local/logstash
New configuration directory: sync-config
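A minimal sketch of preparing that layout, assuming /usr/local/logstash as the install path (BASE is a hypothetical stand-in so the commands can be tried anywhere; the touch stands in for copying the real driver jar):

```shell
# BASE is a stand-in for /usr/local/logstash -- adjust to your install.
BASE="${BASE:-/tmp/logstash-demo}"
# the pipeline config, JDBC driver jar and SQL files all live in sync-config
mkdir -p "$BASE/sync-config"
touch "$BASE/sync-config/mysql-connector-java-5.1.13.jar"
ls "$BASE/sync-config"
```

In a real install, replace `touch` with copying the downloaded `mysql-connector-java-5.1.13.jar` into the directory.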
1). Full configuration text
/usr/local/logstash/sync-config/cicadaes.conf
input {
    stdin {}
    jdbc {
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/cicada?characterEncoding=utf8"
        jdbc_user => "root"
        jdbc_password => "root123"
        jdbc_driver_library => "/usr/local/logstash/sync-config/mysql-connector-java-5.1.13.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        jdbc_default_timezone => "Asia/Shanghai"
        statement_filepath => "/usr/local/logstash/sync-config/user_sql.sql"
        schedule => "* * * * *"
        type => "User"
        lowercase_column_names => false
        record_last_run => true
        use_column_value => true
        tracking_column => "updateTime"
        tracking_column_type => "timestamp"
        last_run_metadata_path => "/usr/local/logstash/sync-config/user_last_time"
        clean_run => false
    }
    jdbc {
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/cicada?characterEncoding=utf8"
        jdbc_user => "root"
        jdbc_password => "root123"
        jdbc_driver_library => "/usr/local/logstash/sync-config/mysql-connector-java-5.1.13.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        jdbc_default_timezone => "Asia/Shanghai"
        statement_filepath => "/usr/local/logstash/sync-config/log_sql.sql"
        schedule => "* * * * *"
        type => "Log"
        lowercase_column_names => false
        record_last_run => true
        use_column_value => true
        tracking_column => "updateTime"
        tracking_column_type => "timestamp"
        last_run_metadata_path => "/usr/local/logstash/sync-config/log_last_time"
        clean_run => false
    }
}
filter {
    json {
        source => "message"
        remove_field => ["message"]
    }
}
output {
    if [type] == "User" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "cicada_user_search"
            document_type => "user_search_index"
        }
    }
    if [type] == "Log" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "cicada_log_search"
            document_type => "log_search_index"
        }
    }
}
2). SQL files
user_sql.sql
SELECT
    id,
    user_name userName,
    user_phone userPhone,
    create_time createTime,
    update_time updateTime
FROM c_user
WHERE update_time > :sql_last_value

log_sql.sql
SELECT
    id,
    param_value paramValue,
    request_ip requestIp,
    create_time createTime,
    update_time updateTime
FROM c_log
WHERE update_time > :sql_last_value
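The :sql_last_value mechanism can be pictured with a tiny stand-in: on each schedule tick, Logstash substitutes the stored last-run value into the statement, so only rows whose update_time is newer come back. A hypothetical simulation with awk over CSV stand-in data (not the real JDBC path; "YYYY-MM-DD HH:MM:SS" timestamps compare correctly as plain strings):

```shell
# last_run plays the role of :sql_last_value / last_run_metadata_path
last_run="2024-01-01 00:00:00"
# columns: id, name, update_time -- only rows newer than last_run are selected
printf '%s\n' \
  "1,zhang,2023-12-31 09:00:00" \
  "2,li,2024-01-02 10:00:00" \
  "3,wang,2024-01-03 08:30:00" |
awk -F',' -v last="$last_run" '$3 > last {print $1}'
# prints ids 2 and 3; row 1 predates last_run and is skipped
```

After a tick, Logstash records the newest updateTime it saw, so the next run starts from there instead of rescanning the whole table.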
3). Description of configuration parameters
Input parameters
statement_filepath: location of the SQL statement file to execute
schedule: cron-style schedule; configured here to run every minute
type: type identifier written to ES, used to tell the two sources apart
lowercase_column_names: whether field names written to ES are converted to lowercase
record_last_run: record the time of the last execution
use_column_value: track progress by a column value rather than the run time
tracking_column: the column (updateTime) used to identify incremental data
tracking_column_type: type of the tracking column
Output parameters
hosts: ES service address
index: index name, analogous to a database name
document_type: type name, analogous to a table name
3. Startup process
/usr/local/logstash/bin/logstash -f /usr/local/logstash/sync-config/cicadaes.conf
II. ES client tool
1. Download the software
Kibana-6.3.2-windows-x86_64
2. Modify the configuration
Kibana-6.3.2-windows-x86_64\config\kibana.yml
Add configuration:
elasticsearch.url: "http://127.0.0.1:9200"
3. Double-click to start
Kibana-6.3.2-windows-x86_64\bin\kibana.bat
4. Access address
http://localhost:5601
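Once both pipelines have run, the synced data can be spot-checked from Kibana's Dev Tools console (or any HTTP client). A minimal sketch, assuming the index names configured above:

```
GET /cicada_user_search/_count

GET /cicada_log_search/_search
{
  "query": {
    "match_all": {}
  }
}
```

A non-zero count confirms the full sync; re-running the check after updating a row should show the incremental path picking it up within a minute.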
Source code addresses:
GitHub: https://github.com/cicadasmile/linux-system-base
GitEE: https://gitee.com/cicadasmile/linux-system-base