Example Analysis of ubuntu Multi-machine Edition Building storm Environment
In this article I will share an example walkthrough of building a multi-machine Storm environment on Ubuntu. Most people are not very familiar with this topic, so I am sharing it for your reference; I hope you learn something useful from it. Let's get started!
1 reference blog posts
http://my.oschina.net/u/2285247/blog/546608
http://www.cnblogs.com/kemaswill/archive/2012/10/24/2737833.html
http://blog.csdn.net/rzhzhz/article/details/7448894
http://blog.csdn.net/shirdrn/article/details/7183503#t0
2 distributed Storm installation
The distributed Storm installation builds on the stand-alone installation. For the stand-alone installation, see http://my.oschina.net/u/2285247/blog/546608.
A Storm cluster contains one central node, Nimbus, and multiple slave nodes, the Supervisors.
2.1 Host name to IP address mapping configuration
There are two key roles in a ZooKeeper cluster: Leader and Follower. All the nodes in the cluster provide service to distributed applications as a whole, and every node in the cluster connects to every other node. Therefore, when configuring a ZooKeeper cluster, each node's hostname-to-IP mapping file must contain the mapping information of all the other nodes in the cluster.
For example, here is the configuration of each node in my ZooKeeper cluster. Taking mem1 as an example, the content of /etc/hosts is as follows:
192.168.100.206 mem1
192.168.100.207 mem2
192.168.100.208 mem3
2.2 zookeeper cluster
First, we need to build a Zookeeper cluster, which is as follows:
2.2.1 zookeeper download
The difference from the stand-alone version is as follows: extract the archive to the corresponding directory (/usr/local/ here) on each of the ZooKeeper cluster machines (usually an odd number of machines); the cluster here is mem1, mem2, mem3 (corresponding to the hosts entries above).
2.2.2 zookeeper configuration
1. Configure /usr/local/zookeeper/conf/zoo.cfg on each ZooKeeper cluster machine (this file does not exist by default; it can be created by renaming zoo_sample.cfg), as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/usr/local/zookeeper/zookeeperdir/zookeeper-data
dataLogDir=/usr/local/zookeeper/zookeeperdir/logs
# the port at which the clients will connect
clientPort=2181
server.1=mem1:2888:3888
server.2=mem2:2888:3888
server.3=mem3:2888:3888
Note: dataDir is ZooKeeper's data directory and needs to be created manually; dataLogDir also needs to be created manually (see the sketch below).
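A minimal sketch of creating both directories (assuming the paths from the zoo.cfg above; run on every cluster node):
# Create the dataDir and dataLogDir referenced in zoo.cfg
mkdir -p /usr/local/zookeeper/zookeeperdir/zookeeper-data
mkdir -p /usr/local/zookeeper/zookeeperdir/logs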
2. Configure the myid file: create a new myid file in the dataDir directory and set the id number (the id number is written directly into the myid file).
The id number is the number after server. in zoo.cfg. For example, server.1=mem1:2888:3888 means the myid on the mem1 machine is 1, and server.2=mem2:2888:3888 means the myid on the mem2 machine is 2.
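For example, on mem1 the myid file could be created like this (a sketch assuming the dataDir configured above):
# Write this node's id into dataDir/myid (run on mem1)
echo 1 > /usr/local/zookeeper/zookeeperdir/zookeeper-data/myid
# On mem2 write 2, on mem3 write 3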
2.2.3 zookeeper Test
1. Start
Execute the following commands on each machine in the ZooKeeper cluster:
cd /usr/local/zookeeper
bin/zkServer.sh start
Note: errors will be reported when the first members start, saying they cannot connect to the other members of the cluster. This is normal; once all the cluster members have started, the error messages disappear.
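If passwordless SSH between the nodes is set up, all members can also be started from a single machine with a loop like the following (a sketch under that assumption; the hostnames and install path are the ones used above):
# Start ZooKeeper on every cluster member over SSH
for h in mem1 mem2 mem3; do
ssh $h "cd /usr/local/zookeeper && bin/zkServer.sh start"
done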
2. View cluster status
/usr/local/zookeeper/bin/zkServer.sh status
If it started normally, you will see output like the following:
[root@mem1 zookeeper]$ bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
If an error occurred, the following information is reported instead:
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
To view the log, execute:
root@mem1:/usr/local/zookeeper/bin# tail -f zookeeper.out
Solution:
sudo vi /etc/hosts
Comment out the line starting with 127.0.0.1, then restart ZooKeeper and it will work!
Alternatively, run jps to check whether a QuorumPeerMain process exists; if so, ZooKeeper is already running.
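For reference, on a healthy node the jps output might look something like this (the PIDs will differ):
root@mem1:~# jps
2731 QuorumPeerMain
3120 Jps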
2.3 install dependent software and Storm
We install Python, ZeroMQ, jzmq, and Storm on each Storm node (the installation process is the same as on a stand-alone machine).
2.4 configure Storm clusters
Configure Storm on every node (this is not required on zookeeper-only nodes). Based on previous experience, each line in the file should start with a space, and there must be a space after each colon!
Configure the conf/storm.yaml file as follows:
 nimbus.host: "nimbus"
 storm.local.dir: "/usr/local/apache-storm-0.10.0/local/dir"
 storm.zookeeper.servers:
     - "mem1"
     - "mem2"
     - "mem3"
 storm.zookeeper.port: 2181
Note:
Where nimbus.host is the hostname or IP address of the Nimbus node.
storm.local.dir is a directory that stores related data such as jars and topology information, and it needs to be created manually (see the sketch after these notes).
storm.zookeeper.servers lists the hostnames or IP addresses of the ZooKeeper cluster.
storm.zookeeper.port is the port number of the ZooKeeper service; it must match the one your ZooKeeper service actually uses (2181 is the default).
The above configuration is needed on every node, both Nimbus and Supervisor, and the first three items are required. If the second item is not set, a Connection Refused error will be reported.
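A minimal sketch of creating the local directory (assuming the path from the storm.yaml above; run on every Storm node):
# Create the directory referenced by storm.local.dir in storm.yaml
mkdir -p /usr/local/apache-storm-0.10.0/local/dir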
Configure ~/.storm/storm.yaml on the Nimbus node (create it if the file does not exist):
nimbus.host: "nimbus"
At this point, the Storm cluster is configured.
2.5 start the Storm cluster
Start the Nimbus service on the Nimbus node:
cd /usr/local/apache-storm-0.10.0
bin/storm nimbus &
The trailing & runs the process in the background; otherwise the current terminal can no longer accept commands.
Start the UI on the Nimbus node so that you can observe the state of the whole Storm cluster and its Topologies in a browser at http://nimbus-host:8080:
bin/storm ui &
Start Supervisor on the Supervisor node:
bin/storm supervisor &
This starts the entire Storm cluster.
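As a quick sanity check, jps on each node should show the corresponding daemon. A sketch of what I would expect on the Nimbus node (process names as they typically appear for Storm 0.10; PIDs will differ):
root@nimbus:~# jps
3021 nimbus
3310 core
3544 Jps
Here nimbus is the Nimbus daemon and core is the UI process; on the Supervisor nodes, expect a supervisor process instead.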
2.6 Storm cluster testing
The Storm cluster can be tested on the Nimbus node using the same StormStarter.jar that we used earlier on the stand-alone version:
Storm jar StormStarter.jar storm.starter.ExclamationTopology exclamation
Note that unlike stand-alone Storm, we append the string exclamation after the main class ExclamationTopology; this string is arbitrary and serves as the Topology's name in the cluster. If it is omitted, the Topology runs in a stand-alone (local) environment instead.
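After submitting, the topology can be checked and stopped from the Nimbus node with Storm's CLI; a short sketch (exclamation is the name chosen above):
# List the topologies currently running on the cluster
bin/storm list
# Kill the test topology when finished
bin/storm kill exclamation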
That is the whole content of "Example Analysis of ubuntu Multi-machine Edition Building storm Environment". Thank you for reading! I hope what is shared here is helpful to you.