
Installation steps for Storm 0.9.4


This article explains the installation steps for Storm 0.9.4. The content is simple, clear, and easy to follow; please work through it step by step with the editor to study the "Storm 0.9.4 installation steps" together.

Environment: three virtual machines running CentOS 6.5.

1. Turn off the firewall, configure /etc/hosts, and add the hostname-to-IP mappings for the cluster hosts

[grid@hadoop4 ~]$ cat /etc/hosts
127.0.0.1      localhost
::1            localhost
192.168.0.106  hadoop4
192.168.0.107  hadoop5
192.168.0.108  hadoop6
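
Step 1 also calls for turning off the firewall, but the original does not show the commands. A minimal sketch for CentOS 6.5 (which uses iptables by default), run as root on every node, might look like this:

## Assumed commands -- not shown in the original article; CentOS 6.5 ships with iptables
[root@hadoop4 ~]# service iptables stop        ## stop the firewall for the current session
[root@hadoop4 ~]# chkconfig iptables off       ## keep it disabled after reboot
[root@hadoop4 ~]# service iptables status      ## verify: should report that iptables is not running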

2. Install Java (JDK 6 or above) and configure the JAVA_HOME and CLASSPATH environment variables

[grid@hadoop4 ~]$ cat .bash_profile
JAVA_HOME=/usr/java/jdk1.7.0_72
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH
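
To confirm the variables take effect, a quick check (not part of the original steps) can be run after reloading the profile:

[grid@hadoop4 ~]$ source .bash_profile
[grid@hadoop4 ~]$ echo $JAVA_HOME              ## should print /usr/java/jdk1.7.0_72
[grid@hadoop4 ~]$ java -version                ## should report the JDK 1.7.0_72 configured above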

3. Install Python

First check the Python version that ships with the system. If it is 2.6.6 or above, no separate installation is needed.

[grid@hadoop4 ~]$ python
Python 2.6.6 (Jan 22 2014, 09:42:36)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

4. Set up Zookeeper cluster

## Download and decompress ##
[grid@hadoop4 ~]$ wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[grid@hadoop4 ~]$ tar -zxf zookeeper-3.4.6.tar.gz

## Modify the configuration file ##
[grid@hadoop4 ~]$ cd zookeeper-3.4.6/conf/
[grid@hadoop4 conf]$ cp -p zoo_sample.cfg zoo.cfg
[grid@hadoop4 conf]$ vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000        ## server heartbeat interval, in milliseconds
# The number of ticks that the initial
# synchronization phase can take
initLimit=10         ## time allowed for followers to connect and sync with a new leader
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5          ## maximum heartbeat tolerance between leader and follower; if a follower does not respond within syncLimit*tickTime, the leader considers it dead and removes it
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/grid/zookeeper-3.4.6/data   ## data directory; it must be created manually
# dataLogDir=                             ## log directory; if not specified, the same setting as dataDir is used
# the port at which the clients will connect
clientPort=2181      ## the port that listens for client connections
## server.id=host:port:port -- id is a number indicating which server this is, and it is also written
## to that server's myid file; host is the zookeeper server's ip or hostname; the first port is used
## by the leader to communicate with followers; the second port is used when electing a leader
server.1=hadoop4:2888:3888
server.2=hadoop5:2888:3888
server.3=hadoop6:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable autopurge feature
#autopurge.purgeInterval=1

## Manually create the data directory ##
[grid@hadoop4 conf]$ cd /home/grid/zookeeper-3.4.6
[grid@hadoop4 zookeeper-3.4.6]$ mkdir data

## Distribute zookeeper ##
[grid@hadoop4 zookeeper-3.4.6]$ cd ..
[grid@hadoop4 ~]$ scp -rp zookeeper-3.4.6 grid@hadoop5:/home/grid/
[grid@hadoop4 ~]$ scp -rp zookeeper-3.4.6 grid@hadoop6:/home/grid/

## Create a myid file in the data directory on each host, writing the id number that identifies that host ##
[grid@hadoop4 ~]$ echo "1" > zookeeper-3.4.6/data/myid
[grid@hadoop5 ~]$ echo "2" > zookeeper-3.4.6/data/myid
[grid@hadoop6 ~]$ echo "3" > zookeeper-3.4.6/data/myid

## Start zookeeper ##
[grid@hadoop4 ~]$ zookeeper-3.4.6/bin/zkServer.sh start
[grid@hadoop5 ~]$ zookeeper-3.4.6/bin/zkServer.sh start
[grid@hadoop6 ~]$ zookeeper-3.4.6/bin/zkServer.sh start

## Check zookeeper status ##
[grid@hadoop4 ~]$ zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /home/grid/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[grid@hadoop5 ~]$ zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /home/grid/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
[grid@hadoop6 ~]$ zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /home/grid/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
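
As an optional extra check beyond zkServer.sh status, the bundled zkCli.sh client can confirm that the ensemble answers requests. A sketch (not in the original steps), connecting to any node of the cluster configured above:

[grid@hadoop4 ~]$ zookeeper-3.4.6/bin/zkCli.sh -server hadoop5:2181
[zk: hadoop5:2181(CONNECTED) 0] ls /           ## a fresh ensemble shows only the /zookeeper system node
[zookeeper]
[zk: hadoop5:2181(CONNECTED) 1] quit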

5. Install Storm

## Download and decompress ##
[grid@hadoop4 ~]$ wget http://mirrors.cnnic.cn/apache/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz
[grid@hadoop4 ~]$ tar -zxf apache-storm-0.9.4.tar.gz
[grid@hadoop4 ~]$ mv apache-storm-0.9.4 storm-0.9.4

## Modify the configuration items ##
[grid@hadoop4 conf]$ vim storm.yaml
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

########### These MUST be filled in for a storm configuration
## Zookeeper cluster addresses used by the storm cluster
storm.zookeeper.servers:
    - "hadoop4"
    - "hadoop5"
    - "hadoop6"
storm.zookeeper.port: 2181
## address of the Nimbus machine in the cluster
nimbus.host: "hadoop4"
## local disk directory used by the Nimbus and Supervisor processes to store a small amount of state
## (jars, confs, etc.); it must be created in advance and given sufficient access rights
storm.local.dir: "/home/grid/storm-0.9.4/data"
## for each Supervisor worker node, configure how many workers that node can run. Each worker
## occupies a separate port for receiving messages, and this option defines which ports the
## workers may use. By default each node can run four workers, on ports 6700, 6701, 6702 and 6703.
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703

## These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"
#
## Metrics Consumers
# topology.metrics.consumer.register:
#   - class: "backtype.storm.metric.LoggingMetricsConsumer"
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"
#
## Other configuration items can be found at:
## https://github.com/nathanmarz/storm/blob/master/conf/defaults.yaml

## Create the data directory ##
[grid@hadoop4 conf]$ cd /home/grid/storm-0.9.4/
[grid@hadoop4 storm-0.9.4]$ mkdir data

## Distribute Storm ##
[grid@hadoop4 ~]$ scp -rp storm-0.9.4/ grid@hadoop5:/home/grid/
[grid@hadoop4 ~]$ scp -rp storm-0.9.4/ grid@hadoop6:/home/grid/

## Edit the environment variables ##
[grid@hadoop4 ~]$ vim .bash_profile
export STORM_HOME=/home/grid/storm-0.9.4
export PATH=$PATH:$STORM_HOME/
[grid@hadoop4 ~]$ source .bash_profile

## Start Storm (make sure zookeeper has been started) ##
[grid@hadoop4 ~]$ storm nimbus &        ## run the Nimbus daemon on the master node
[grid@hadoop5 ~]$ storm supervisor &    ## run the Supervisor daemon on the worker nodes
[grid@hadoop6 ~]$ storm supervisor &
[grid@hadoop4 ~]$ storm ui &            ## run the UI on the master node; after startup, open http://<master-node-ip>:<port> in a browser (default port 8080)
[grid@hadoop4 ~]$ storm logviewer &     ## run the LogViewer on the master node; after startup, click the corresponding worker in the UI to view its log

[grid@hadoop4 ~]$ jps
2959 QuorumPeerMain
3310 logviewer
3414 Jps
3228 nimbus
3289 core
[grid@hadoop5 ~]$ jps
2907 QuorumPeerMain
3215 Jps
3154 supervisor
[grid@hadoop6 ~]$ jps
3248 Jps
2935 QuorumPeerMain
3186 supervisor
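
To verify that the cluster actually accepts work, you can optionally submit the WordCountTopology from the storm-starter examples. This is a sketch only; the jar name below is illustrative, and building storm-starter is not covered by this article:

## Assumed example -- storm-starter must be built separately; the jar file name is illustrative
[grid@hadoop4 ~]$ storm jar storm-starter-0.9.4-jar-with-dependencies.jar storm.starter.WordCountTopology wordcount
[grid@hadoop4 ~]$ storm list                   ## the "wordcount" topology should appear as ACTIVE
[grid@hadoop4 ~]$ storm kill wordcount         ## remove the test topology when finished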

Thank you for reading. The above covers the installation steps for Storm 0.9.4; after studying this article you should have a better grasp of how to install Storm 0.9.4, though the specifics still need to be verified in practice. The editor will keep pushing more articles on related topics, so welcome to follow!
