2025-04-01 Update From: SLTechnology News&Howtos
Today I'll show you how to quickly deploy and experience real-time data stream computing with the DBus all-in-one package. I hope you find it helpful; let's walk through it together.
I. Environment description
Note: all-in-one is a single-node DBus environment that gives users a quick way to experience DBus. It is only a simple trial version and must not be used for any other environment or purpose. It includes the following:
1) Base components:
Zookeeper 3.4.6
Kafka 0.10.0.0
Storm 1.0.1
Grafana 4.2.0
Logstash 5.6.1
InfluxDB (needs to be installed separately; see step 3 below)
MySQL (needs to be installed separately; see step 2 below)
2) dbus related packages:
Dbus-keeper 0.5.0
Dbus-stream-main 0.5.0
Dbus-router 0.5.0
Dbus-heartbeat 0.5.0
Dbus-log-processor 0.5.0
3) Required for MySQL data sources:
Canal
1.1 Environment dependencies
The recommended configuration for the dbus-allinone environment is as follows:
JDK 1.8.181 or above
CPU 2 core or above
Memory 16GB or above
Disk 20GB or above
Note: use a CentOS Linux server, preferably a clean machine. Do not pre-install the ZooKeeper, Kafka, Storm, etc. that DBus depends on.
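As a sketch, the sizing recommendations above can be checked with a short preflight script before installing anything (thresholds are taken from the table above; standard Linux tools such as nproc, awk and GNU df are assumed):

```shell
#!/bin/bash
# Rough preflight check of the recommended allinone sizing (a sketch, not
# part of the dbus package): 2+ cores, 16GB+ memory, 20GB+ free disk.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
echo "cores=$cores mem=${mem_gb}GB disk_free=${disk_gb}GB"
[ "$cores" -ge 2 ]    || echo "warn: fewer than 2 CPU cores"
[ "$mem_gb" -ge 16 ]  || echo "warn: less than 16GB memory"
[ "$disk_gb" -ge 20 ] || echo "warn: less than 20GB free disk on /"
```

The script only warns rather than aborting, since the sizing is a recommendation rather than a hard requirement.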
1.2 modify the domain name
Note: replace the IP with your server's actual IP; 192.168.0.1 is used as an example.
Modify the server's /etc/hosts file to set the corresponding domain name information as follows:
192.168.0.1 dbus-n1
Change the server's hostname with the following command:
hostname dbus-n1
After configuration, verify the server's IP and hostname (for example with hostname and ping dbus-n1).
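The hosts-file step above can be scripted so that repeated runs do not duplicate the entry. A minimal sketch (the helper name is hypothetical; on the real server run it as root against /etc/hosts with your actual IP):

```shell
#!/bin/bash
# add_host_entry FILE IP NAME : append "IP NAME" to FILE only if that exact
# line is not already present (idempotent, so re-running is safe).
add_host_entry() {
  local file=$1 entry="$2 $3"
  grep -qxF "$entry" "$file" 2>/dev/null || echo "$entry" >> "$file"
}
# On the real server (as root):
#   add_host_entry /etc/hosts 192.168.0.1 dbus-n1
```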
1.3 Create the app user and configure passwordless SSH login
DBus starts its topologies by invoking Storm commands over SSH, and the all-in-one package assumes the app user and port 22 by default. To run all-in-one normally, you therefore need to create an app account and configure passwordless SSH login from dbus-n1 to dbus-n1.
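A minimal sketch of the key setup, assuming OpenSSH. On the real server, create the account first (useradd app, then passwd app, as root) and run the rest as the app user so that SSH_DIR resolves to /home/app/.ssh:

```shell
#!/bin/bash
# Sketch: set up passwordless SSH from this host to itself for the current
# user. SSH_DIR defaults to the user's own ~/.ssh.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
# generate a passphrase-less RSA key pair if none exists yet
[ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -q -t rsa -N '' -f "$SSH_DIR/id_rsa"
# authorize the key for logins to this same host (dbus-n1 -> dbus-n1)
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```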
After the configuration is complete, execute the following command to verify that it succeeded:
[app@dbus-n1 ~]$ ssh -p 22 app@dbus-n1
Last login: Fri Aug 10 15:54:45 2018 from 10.10.169.53
[app@dbus-n1 ~]$
2. Preparation
2.1 Install MySQL
2.1.1 Download
The recommended MySQL version is 5.7.19. Download address: https://dev.mysql.com/downloads/mysql/
2.1.2 installation
After unpacking the mysql-5.7.19-1.el6.x86_64.rpm-bundle.tar package, execute the following commands to install:
rpm -ivh mysql-community-server-5.7.19-1.el6.x86_64.rpm --nodeps
rpm -ivh mysql-community-client-5.7.19-1.el6.x86_64.rpm --nodeps
rpm -ivh mysql-community-libs-5.7.19-1.el6.x86_64.rpm --nodeps
rpm -ivh mysql-community-common-5.7.19-1.el6.x86_64.rpm --nodeps
rpm -ivh mysql-community-libs-compat-5.7.19-1.el6.x86_64.rpm --nodeps
2.1.3 Configuration
In the /etc/my.cnf configuration file, add only the binlog-related settings below; nothing else needs to change. Note the comment lines marking the section:
[mysqld]
# dbus related configuration start
log-bin=mysql-bin
binlog-format=ROW
server_id=1
# dbus related configuration end
2.1.4 Start
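As a quick sanity check before starting MySQL, you can confirm the three required binlog lines actually landed in the config file. This is a sketch: the helper name is hypothetical and the grep patterns assume the settings were added exactly as shown above.

```shell
#!/bin/bash
# check_binlog_cnf FILE : exit 0 only if the dbus-required binlog settings
# (log-bin, binlog-format=ROW, server_id) are all present in FILE.
check_binlog_cnf() {
  local cnf=$1
  grep -q '^log-bin=' "$cnf" &&
    grep -q '^binlog-format=ROW' "$cnf" &&
    grep -q '^server_id=' "$cnf"
}
# usage on the server:
#   check_binlog_cnf /etc/my.cnf && echo "binlog config ok"
```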
Execute the following command to start MySQL:
service mysqld start
2.2 Install InfluxDB
2.2.1 Download
The recommended InfluxDB version is influxdb-1.1.0.x86_64. Download address: https://portal.influxdata.com/downloads
2.2.2 installation
Switch to the root user on dbus-n1 and execute the following command in the directory containing influxdb-1.1.0.x86_64.rpm:
rpm -ivh influxdb-1.1.0.x86_64.rpm
2.2.3 Start
Execute the following command on dbus-n1:
service influxdb start
2.2.4 Initialization configuration
Execute the following command on dbus-n1:
# log in to influx
influx
# execute the initialization script
create database dbus_stat_db
use dbus_stat_db
CREATE USER "dbus" WITH PASSWORD 'dbus!@#123'
ALTER RETENTION POLICY autogen ON dbus_stat_db DURATION 15d
3. Install the Dbus-allinone package
3.1 Download
The dbus-allinone.tar.gz package is provided on Baidu Netdisk. Visit the release page to download the latest package: https://github.com/BriData/DBus/releases
3.2 installation
Upload the downloaded dbus-allinone package to the server's /app directory; it must be in this exact directory.
# if you do not have an /app directory, create it first
mkdir /app
cd /app
tar -zxvf dbus-allinone.tar.gz
3.3 Initialize the database
Log in to the mysql client as root and execute the following command to initialize the database. It creates the dbusmgr library and user, the canal user, the dbus library and user, and the testschema library and user:
source /app/dbus-allinone/sql/init.sql
3.4 Start
Execute start.sh to start all DBus services in one step; there are many startup items.
cd /app/dbus-allinone
./start.sh
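Rather than watching the log by hand, a small sketch can poll the service ports until they open. The helper is hypothetical; the port numbers 6672 (Storm UI) and 6090 (DBus keeper) come from the startup log below, while grafana's 3000 is its usual default. It uses bash's built-in /dev/tcp, so no extra tools are needed:

```shell
#!/bin/bash
# wait_for_port HOST PORT TIMEOUT_SECONDS : retry a TCP connect once per
# second until it succeeds (exit 0) or TIMEOUT_SECONDS elapse (exit 1).
wait_for_port() {
  local host=$1 port=$2 timeout=$3 waited=0
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}
# usage on the server, after ./start.sh:
#   wait_for_port dbus-n1 6672 300 && echo "storm ui is up"
```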
Please wait patiently (about 5 minutes). The log of a successful start looks as follows:
Start grafana...
Grafana started. Pid: 23760
==============================
Start zookeeper...
zookeeper pid 23818
Zookeeper started.
==============================
Start kafka...
No kafka server to stop
kafka pid 24055
kafka started.
==============================
Start Canal...
Canal started.
==============================
Start logstash...
No logstash to stop
nohup: appending output to `nohup.out'
logstash pid 24151
logstash started.
==============================
Start storm nimbus...
No storm nimbus to stop
Storm nimbus pid 24215
Storm nimbus started.
==============================
Start storm supervisor...
No storm supervisor to stop
Storm supervisor pid 24674
Storm supervisor started.
==============================
Start storm ui...
No storm ui to stop
Storm ui pid 24939
Storm ui started. Ui port: 6672
==============================
Stop storm topology.
Storm topology stoped.
==============================
Start storm topology...
Storm topology started.
==============================
Start Dbus Heartbeat...
No Dbus Heartbeat to stop
Dbus Heartbeat pid 26854
Dbus Heartbeat started.
==============================
Start Dbus keeper...
==stop==
keeper-proxy process not exist
gateway process not exist
keeper-mgr process not exist
keeper-service process not exist
register-server process not exist
==start==
register-server started. Pid: 27077
keeper-proxy started. Pid: 27172
gateway started. Pid: 27267
keeper-mgr started. Pid: 27504
keeper-service started. Pid: 27645
Dbus keeper port: 6090
Dbus keeper started.
==============================
3.5 Generate a check report to see whether everything started properly
Enter the directory / app/dbus-allinone/allinone-auto-check-0.5.0, execute the automatic detection script auto-check.sh, and wait a moment
cd /app/dbus-allinone/allinone-auto-check-0.5.0
./auto-check.sh
A check report for the corresponding time will be produced under /app/dbus-allinone/allinone-auto-check-0.5.0/reports, as shown below:
[app@dbus-n1 reports]$ tree
.
└── 20180824111905
    └── check_report.txt
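A short sketch can locate the newest report and grep it for failure markers instead of reading it line by line. The helper name and the marker strings ("error", "fail", "not exist") are assumptions for illustration, not auto-check.sh's documented output format:

```shell
#!/bin/bash
# check_latest_report REPORTS_DIR : find the newest timestamped report
# directory and scan its check_report.txt for suspicious lines.
check_latest_report() {
  local dir=$1 latest
  latest=$(ls -1 "$dir" | sort | tail -n 1)
  [ -n "$latest" ] || return 1
  if grep -inE 'error|fail|not exist' "$dir/$latest/check_report.txt"; then
    return 1    # suspicious lines were printed above; inspect them
  fi
  echo "no obvious failures in $latest"
}
# usage: check_latest_report /app/dbus-allinone/allinone-auto-check-0.5.0/reports
```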
Open the check_report.txt file to view the corresponding inspection report, as shown below
(Note: the explanatory lines begin with # and are not generated in the actual report.)
# The following indicates that the dbusmgr library is normal
check db&user dbusmgr start:
==============================
table t_avro_schema data count: 0
table t_data_schema data count: 4
table t_data_tables data count: 4
table t_dbus_datasource data count: 2
table t_ddl_event data count: 0
table t_encode_columns data count: 0
table t_encode_plugins data count: 1
table t_fullpull_history data count: 0
table t_meta_version data count: 5
table t_plain_log_rule_group data count: 1
table t_plain_log_rule_group_version data count: 1
table t_plain_log_rule_type data count: 0
table t_plain_log_rules data count: 5
table t_plain_log_rules_version data count: 5
table t_project data count: 1
table t_project_encode_hint data count: 1
table t_project_resource data count: 1
table t_project_sink data count: 1
table t_project_topo data count: 1
table t_project_topo_table data count: 1
table t_project_topo_table_encode_output_columns data count: 1
table t_project_topo_table_meta_version data count: 0
table t_project_user data count: 1
table t_query_rule_group data count: 0
table t_sink data count: 1
table t_storm_topology data count: 0
table t_table_action data count: 0
table t_table_meta data count: 7
table t_user data count:
# The following indicates that the dbus library is normal
check db&user dbus start:
==============================
table db_heartbeat_monitor data count: 15
table test_table data count: 0
table db_full_pull_requests data count:
# The following indicates that the canal user is normal
check db&user canal start:
==============================
master status File: mysql-bin.000002 Position: 12047338
table db_heartbeat_monitor data count: 15
table test_table data count: 0
table db_full_pull_requests data count:
# The following indicates that the testschema library is normal
check db&user testschema start:
==============================
table test_table data count:
# The following indicates that zookeeper started normally
check base component zookeeper start:
==============================
23818 org.apache.zookeeper.server.quorum.QuorumPeerMain
# The following indicates that kafka started normally
check base component kafka start:
==============================
24055 kafka.Kafka
# The following indicates that storm nimbus, supervisor and ui started normally
check base component storm start:
==============================
26500 org.apache.storm.daemon.worker
25929 org.apache.storm.daemon.worker
27596 org.apache.storm.LogWriter
26258 org.apache.storm.LogWriter
24215 org.apache.storm.daemon.nimbus
27035 org.apache.storm.LogWriter
27611 org.apache.storm.daemon.worker
26272 org.apache.storm.daemon.worker
24674 org.apache.storm.daemon.supervisor
24939 org.apache.storm.ui.core
26486 org.apache.storm.LogWriter
27064 org.apache.storm.daemon.worker
25915 org.apache.storm.LogWriter
# The following indicates that influxdb started normally
check base component influxdb start:
==============================
influxdb 10265 1 0 Aug08 ? 02:28:06 /usr/bin/influxd -pidfile /var/run/influxdb/influxd.pid -config /etc/influxdb/influxdb.conf
app 28823 28746 0 11:19 pts/3 00:00:00 /bin/sh -c ps -ef | grep influxdb
app 28827 28823 0 11:19 pts/3 00:00:00 grep influxdb
# The following indicates that grafana started normally
check base component grafana start:
==============================
app 23760 1 0 11:09 pts/3 00:00:00 ./grafana-server
app 28828 28746 0 11:19 pts/3 00:00:00 /bin/sh -c ps -ef | grep grafana
app 28832 28828 0 11:19 pts/3 00:00:00 grep grafana
# The following indicates that the heartbeat started normally
check base component heartbeat start:
==============================
26854 com.creditease.dbus.heartbeat.start.Start
# The following indicates that logstash started normally
check base component logstash start:
==============================
24151 org.jruby.Main
# The following indicates that canal started normally
check canal start:
==============================
zk path [/DBus/Canal/otter-testdb] exists.
24105 com.alibaba.otter.canal.deployer.CanalLauncher
# The following indicates that the dispatcher-appender, mysql-extractor, splitter-puller and router topologies started normally
check topology start:
==============================
api: http://dbus-n1:6672/api/v1/topology/summary
topology testlog-log-processor status is ACTIVE
topology testdb-mysql-extractor status is ACTIVE
topology testdb-splitter-puller status is ACTIVE
topology testdb-dispatcher-appender status is ACTIVE
topology tr-router status is ACTIVE
# The following indicates that the database -> extractor -> dispatcher -> appender flow line is normal
check flow line start:
==============================
first step insert heart beat success.
data arrive at topic: testdb
data arrive at topic: testdb.testschema
data arrive at topic: testdb.testschema.result
4. Verify that the all-in-one package is installed successfully
4.1 Prerequisites for logging in to grafana
The hosts file must be configured on the machine whose browser will access grafana. Skip this step if it is already configured.
If the verification machine runs Windows, modify the file C:\Windows\System32\drivers\etc\hosts to set the corresponding domain name information as follows:
# replace 192.168.0.1 with the IP of the server where the allinone package was deployed
192.168.0.1 dbus-n1
If the verification machine runs Linux, modify the /etc/hosts file to set the corresponding domain name information as follows:
# replace 192.168.0.1 with the IP of the server where the allinone package was deployed
192.168.0.1 dbus-n1
4.2 Log in to grafana
Grafana login URL: http://dbus-n1:3000/login
4.3 Verify by inserting data into MySQL
# log in as the test user
mysql -utestschema -p    # testschema account password: J0