2025-01-15 update · Shulou (shulou.com) · SLTechnology News & Howtos > Servers
This article explains the installation and deployment steps for Storm-0.9.3. The method described is simple, fast, and practical; let's walk through it step by step.
Storm-0.9.3 installation and deployment steps
There are two types of nodes in a Storm cluster: master nodes and worker nodes. Their roles are as follows:
A daemon called Nimbus runs on the master node. It is responsible for distributing code within the Storm cluster, assigning tasks to worker machines, and monitoring the state of the cluster. Nimbus plays a role similar to that of the JobTracker in Hadoop.
A daemon called Supervisor runs on each worker node. The Supervisor listens for the tasks Nimbus assigns to it and starts or stops the worker processes that execute them. Each worker process executes a subset of a Topology; a running Topology consists of multiple worker processes distributed across different worker nodes.
Storm cluster components
All coordination between Nimbus and the Supervisor nodes goes through the Zookeeper cluster. In addition, both the Nimbus and Supervisor processes are fail-fast and stateless; all of the Storm cluster's state lives either in the Zookeeper cluster or on local disk. This means you can kill the Nimbus and Supervisor processes with kill -9 and they will resume work after a restart. This design makes the Storm cluster remarkably stable.
-
Set up Zookeeper cluster
Storm uses Zookeeper to coordinate the cluster, and since Zookeeper is not used for message passing, the load Storm places on it is quite low. In most cases a single-node Zookeeper instance is adequate, but to guarantee failure recovery, or when deploying a large-scale Storm cluster, a larger Zookeeper ensemble may be required (the officially recommended minimum is 3 nodes). Complete the following installation and deployment steps on each machine in the Zookeeper cluster:
1. Download and install the Java JDK. The official download link is http://java.sun.com/javase/downloads/index.jsp; JDK 6 or above is required.
2. Set the Java heap size sensibly for the Zookeeper cluster's load, to avoid swapping and the drop in Zookeeper performance it causes. As a conservative rule, a machine with 4GB of memory can give Zookeeper a maximum heap of 3GB.
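As a concrete sketch of the heap-sizing advice above: standard Zookeeper distributions source conf/java.env (via zkEnv.sh) if it exists, and the JVMFLAGS variable set there is passed to the JVM. The /tmp path here is a stand-in for a real Zookeeper conf/ directory.

```shell
# Sketch: cap the Zookeeper JVM heap at 3 GB on a 4 GB machine to avoid
# swapping. Assumes a standard distribution that sources conf/java.env.
ZK_CONF=/tmp/zk-conf-demo      # substitute your Zookeeper conf/ directory
mkdir -p "$ZK_CONF"
cat > "$ZK_CONF/java.env" <<'EOF'
export JVMFLAGS="-Xms3g -Xmx3g"
EOF
cat "$ZK_CONF/java.env"
```

Setting -Xms equal to -Xmx avoids heap resizing pauses at the cost of committing the memory up front.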
3. Download, extract, and install the Zookeeper package. The official download link is http://hadoop.apache.org/zookeeper/releases.html.
4. Create the Zookeeper configuration file zoo.cfg under the conf directory, listing each node of the cluster:
tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
5. Create a myid file under the dataDir directory; it contains a single line holding this node's id, matching the id in the corresponding server.id entry. Here dataDir is the Zookeeper data directory specified above. In server.id=host:port:port, id is the number of each Zookeeper node (stored in the myid file under dataDir), zookeeper1~zookeeper3 are the hostnames of the Zookeeper nodes, the first port is used by followers to connect to the leader, and the second port is used for leader election.
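The myid step above can be sketched as follows; /tmp/zookeeper-demo stands in for the real dataDir (/var/zookeeper/), and each node gets its own id (1 on zookeeper1, 2 on zookeeper2, 3 on zookeeper3).

```shell
# Sketch: create the myid file that matches server.N in zoo.cfg.
# Run on each node with that node's own id.
DATA_DIR=/tmp/zookeeper-demo   # substitute /var/zookeeper/ on a real node
MY_ID=1                        # this node's id, from server.1=zookeeper1:...
mkdir -p "$DATA_DIR"
echo "$MY_ID" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"
```

If myid does not match the server.N entry for the host, the node will fail to join the quorum, so it is worth double-checking this file on every machine.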
6. Start the Zookeeper service:
bin/zkServer.sh start
7. Test the availability of the service with the Zookeeper client:
bin/zkCli.sh -server 127.0.0.1:2181
-
Machine partition
10.134.84.93 Nimbus
10.139.37.57 Supervisor
10.139.18.45 Supervisor zookeeper
10.134.85.125 Supervisor zookeeper
10.134.74.59 Supervisor zookeeper
Version selection
1. We use the latest Storm release, apache-storm-0.9.3. Download address:
http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.3/apache-storm-0.9.3.tar.gz
2. Storm depends on JDK 6 and Python.
2.1 The machine has JDK 7 installed, and in our testing Storm reports an error on startup with it. So we choose the latest JDK 6 release, 6u45. Download address:
http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase6-419409.html#jdk-6u45-oth-JPR
2.2 The machine comes with Python 2.4.3, but Storm requires version 2.6 or above. We choose 2.7.9. Download address:
https://www.python.org/downloads/release/python-279/
Dependency preparation
1. JDK installation
1.1 Since JDK 7 is installed by default on the machine, typing java -version in a terminal reports 1.7.
Execute jdk-6u45-linux-x64.bin directly.
1.2 It self-extracts, producing a jdk folder in the current directory; then mv this folder to the JAVA_HOME location we chose, such as /opt/local/jdk1.6.0_45.
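To make the shells and the Storm scripts pick up the JDK 6 just moved into place, JAVA_HOME can be exported and its bin/ directory put first on PATH; the path below is the one chosen above.

```shell
# Sketch: point JAVA_HOME at the unpacked JDK 6 and put its bin/ first on
# PATH so `java -version` resolves to 1.6 instead of the preinstalled 1.7.
export JAVA_HOME=/opt/local/jdk1.6.0_45
export PATH="$JAVA_HOME/bin:$PATH"
echo "$JAVA_HOME"
```

Putting these two lines in /etc/profile or ~/.bashrc makes the setting survive new login shells.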
2. Python installation
2.1 Execute tar zxvf Python-2.7.9.tgz to extract the Python installation package.
2.2 After ./configure, make, make install, Python 2.7 is installed to /usr/local/bin/python2.7 by default, while the /usr/bin/python symlink still points to 2.4.3. You can repoint it.
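The symlink replacement above can be sketched like this; it is done in a scratch directory here, while on the real machine the link is /usr/bin/python and the target is /usr/local/bin/python2.7. (Note that on systems whose package tools depend on the old Python, repointing /usr/bin/python can break them, so some prefer to leave the link and adjust PATH instead.)

```shell
# Sketch: repoint the python symlink at the newly built 2.7.9.
BIN=/tmp/python-link-demo            # stand-in for /usr and /usr/local paths
mkdir -p "$BIN"
touch "$BIN/python2.4.3" "$BIN/python2.7"   # stand-ins for the binaries
ln -sf "$BIN/python2.4.3" "$BIN/python"     # the old 2.4 link
ln -sf "$BIN/python2.7" "$BIN/python"       # -f replaces the existing link
readlink "$BIN/python"
```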
Storm configuration
1. Extract the Storm installation package on the server; the configuration files that need to be modified are covered in the next two steps.
2. Modify conf/storm_env.ini to specify the Java environment to use.
3. Modify conf/storm.yaml with the following settings for Storm:
# ZooKeeper servers used by Storm (default client port 2181)
storm.zookeeper.servers:
    - "yf_18_45"
    - "sjs_85_125"
    - "sjs_74_59"
# nimbus node
nimbus.host: "sjs_84_93"
# data storage path
storm.local.dir: "/data/storm"
# local log path
storm.log.dir: "/opt/logs/storm"
# worker slots on each supervisor
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
# drpc servers
drpc.servers:
    - "yf_18_45"
    - "sjs_85_125"
    - "sjs_74_59"
    - "yf_37_57"
4. Perform the above installation steps on each Storm node. You can configure Storm on one machine and then scp the directory to the other servers.
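The scp distribution mentioned above can be sketched as a loop over the supervisor hosts from the machine list; the install path is an assumption, and DRY_RUN echoes the commands instead of running scp so the loop can be checked before use.

```shell
# Sketch: copy the configured Storm directory to each remaining node.
STORM_DIR=/opt/apache-storm-0.9.3             # assumed install path
HOSTS="10.139.37.57 10.139.18.45 10.134.85.125 10.134.74.59"
DRY_RUN=echo                                  # drop this to really copy
for h in $HOSTS; do
  $DRY_RUN scp -r "$STORM_DIR" "$h:$(dirname "$STORM_DIR")"
done
```

This assumes passwordless SSH between the nodes; otherwise each scp will prompt for a password.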
Start storm
1. Start nimbus, storm-ui, and logviewer on the nimbus node:
bin/storm nimbus &
bin/storm ui &
bin/storm logviewer &
2. Start supervisor and logviewer on each supervisor node:
bin/storm supervisor &
bin/storm logviewer &
Verification
1. Visit http://10.134.84.93:8080 to check that the UI is up: the supervisor count should be 4 and the number of free slots 16.
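The slot count shown in the UI follows directly from the configuration: 4 supervisor nodes, each exposing the 4 ports listed under supervisor.slots.ports.

```shell
# Sketch: why the UI shows 16 free slots before any topology runs.
SUPERVISORS=4       # nodes from the machine list above
PORTS_PER_NODE=4    # 6700 6701 6702 6703 in storm.yaml
TOTAL_SLOTS=$((SUPERVISORS * PORTS_PER_NODE))
echo "$TOTAL_SLOTS"
```

If either number on the UI differs, check that every supervisor process started and that each node's storm.yaml lists all four ports.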
2. Submit the test Storm program:
bin/storm jar examples/storm-starter/storm-starter-topologies-0.9.3.jar storm.starter.ExclamationTopology ExclamationTopology
At this point you should have a solid grasp of the Storm-0.9.3 installation and deployment steps; give them a try in practice!