Shulou (Shulou.com), SLTechnology News & Howtos, 06/03 report (updated 2025-02-28)
Version upgrade (using the latest stable versions)
zookeeper-3.4.5 -> zookeeper-3.4.6
Release notes: http://zookeeper.apache.org/doc/r3.4.6/releasenotes.html
kafka_2.9.2-0.8.1 -> kafka_2.9.2-0.8.2.2
Release notes: https://archive.apache.org/dist/kafka/0.8.2.2/RELEASE_NOTES.html
Upgrade instructions: http://kafka.apache.org/documentation.html#upgrade_82
jstorm-0.9.2 -> jstorm-2.1.0
Machine planning (installed from tarballs under the hadoop user)

Host  | IP         | Services
vm13  | 10.1.2.208 | Kafka, QuorumPeerMain (zookeeper), Supervisor (Jstorm), NimbusServer (Jstorm)
vm7   | 10.1.2.197 | Kafka, QuorumPeerMain (zookeeper), Supervisor (Jstorm)
vm8   | 10.1.2.198 | Kafka, QuorumPeerMain (zookeeper), Supervisor (Jstorm)
Machine configuration
CPU: 24 cores, 64 GB of memory, 4 x 2 TB disks
Zookeeper installation
1. Enter the home directory: cd /home/hadoop
2. Download the zookeeper installation package: http://mirrors.hust.edu.cn/apache/zookeeper/stable/zookeeper-3.4.6.tar.gz
3. Extract the installation package: tar -zxf zookeeper-3.4.6.tar.gz
4. Enter the directory: cd zookeeper-3.4.6
5. Copy conf/zoo_sample.cfg to conf/zoo.cfg and edit the configuration in it.
vi conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data0/zookeeper/data
clientPort=2181
server.1=10.1.2.208:12888:13888
server.2=10.1.2.197:12888:13888
server.3=10.1.2.198:12888:13888
vi conf/log4j.properties
zookeeper.root.logger=INFO,ROLLINGFILE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/data0/zookeeper/logs
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.tracelog.dir=/data0/zookeeper/logs
zookeeper.tracelog.file=zookeeper_trace.log
vi bin/zkEnv.sh
if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    ZOO_LOG_DIR="/data0/zookeeper/logs"
fi
if [ "x${ZOO_LOG4J_PROP}" = "x" ]
then
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
fi
Create a myid file under the dataDir directory (/data0/zookeeper/data) on each node, containing that node's server id.
For example, on 10.1.2.208: echo 1 > /data0/zookeeper/data/myid
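The server id must match the N in the node's server.N line in zoo.cfg. As a minimal sketch (the helper function and sample invocation are illustrations, not part of the original setup), the id for a host can be derived from the config instead of being typed by hand:

```shell
# zk_myid: derive a node's myid from zoo.cfg-style text on stdin.
# Each quorum line looks like: server.1=10.1.2.208:12888:13888
zk_myid() {
  # $1 = this host's IP; prints the matching server number
  grep '^server\.' | grep "=$1:" | sed -e 's/^server\.//' -e 's/=.*//'
}

# Example, using the IPs from the machine plan above:
printf 'server.1=10.1.2.208:12888:13888\nserver.2=10.1.2.197:12888:13888\nserver.3=10.1.2.198:12888:13888\n' \
  | zk_myid 10.1.2.208   # prints 1
```

The printed id is what would then be written with, e.g., echo 1 > /data0/zookeeper/data/myid on that host.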
6. Start the service
Run ./bin/zkServer.sh start on each of the three nodes in turn.
7. Verification test
Use the client to connect to the zookeeper console: ./bin/zkCli.sh -server dckafka1:12181
Check whether the current node is the leader or a follower: ./bin/zkServer.sh status
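The status command reports the role on a line of the form "Mode: leader" or "Mode: follower". A small sketch for scripting that check (the exact output format is an assumption about this zookeeper version; verify against your own output):

```shell
# zk_mode: extract the role from captured `zkServer.sh status` output on stdin
zk_mode() {
  sed -n 's/^Mode: //p'
}

# Example, piping a captured status line:
echo 'Mode: follower' | zk_mode   # prints follower
```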
8. Reference documentation
log4j configuration: http://www.cnblogs.com/zhwbqd/p/3957018.html
http://stackoverflow.com/questions/26612908/why-does-zookeeper-not-use-my-log4j-properties-file-log-directory
Kafka installation
1. Enter the home directory: cd /home/hadoop
2. Download the kafka installation package: wget "http://mirrors.hust.edu.cn/apache/kafka/0.8.2.2/kafka_2.9.2-0.8.2.2.tgz"
3. Extract the installation package: tar -zxf kafka_2.9.2-0.8.2.2.tgz
4. Enter the directory: cd kafka_2.9.2-0.8.2.2
5. Configuration modification (please refer to http://kafka.apache.org/documentation.html#topic-config for more configuration)
vi config/server.properties
# each broker is identified by a unique non-negative integer id
broker.id=0
# service listening port
port=9092
# number of threads the broker uses to handle network requests, generally the number of CPU cores
num.network.threads=12
# number of threads the broker uses for disk I/O, generally twice the number of CPU cores
num.io.threads=12
# paths where kafka stores data; there can be more than one, separated by commas. Whenever a new partition is created, it is placed under the path that currently holds the fewest partitions
log.dirs=/data2/kafka/data,/data3/kafka/data
# default number of partitions per topic, which can also be specified when creating a topic
num.partitions=6
# disable automatic creation of topics
auto.create.topics.enable=false
# allow deletion of topics
delete.topic.enable=true
# maximum time logs are retained (in hours)
log.retention.hours=72
# the zookeeper connection string, in the format hostname1:port1,hostname2:port2,hostname3:port3
# a "chroot" path can be appended to store all of the cluster's kafka data under a specific path, in the format hostname1:port1,hostname2:port2,hostname3:port3/chroot/path
# such a setting stores all kafka cluster data under /chroot/path. Note that the path must be created before starting the broker, and consumers must use the same connection string.
zookeeper.connect=vm13:2181,vm7:2181,vm8:2181
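Each broker needs a distinct broker.id, so when rolling this config out to vm13, vm7 and vm8 the id has to vary per host. A minimal sketch of one way to keep the ids stable (the fixed host ordering is an assumption for illustration; any scheme works as long as ids are unique and never change):

```shell
# broker_id: derive a stable broker.id from the host's position in a
# fixed, ordered host list (hosts taken from the machine plan above)
broker_id() {
  # $1 = this hostname; list order defines the id, starting at 0
  i=0
  for h in vm13 vm7 vm8; do
    if [ "$h" = "$1" ]; then echo "$i"; return 0; fi
    i=$((i + 1))
  done
  return 1   # unknown host
}

broker_id vm7   # prints 1
```

The printed value is what would be written into broker.id in that host's server.properties.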
vi config/consumer.properties
# the zookeeper connection for consumers, which should match zookeeper.connect in server.properties
zookeeper.connect=vm13:2181,vm7:2181,vm8:2181
# the consumer group is generally customized on the consumer client
group.id=test.consumer-group
vi config/producer.properties
# kafka broker list
metadata.broker.list=vm13:9092,vm7:9092,vm8:9092
# send messages to the broker asynchronously
producer.type=async
# number of messages per asynchronous batch
batch.num.messages=200
vi config/log4j.properties
# specify the service log directory
kafka.logs.dir=/data2/kafka/logs
log4j.rootLogger=INFO, kafkaAppender
Create a startup script (cat > startKafkaServer.sh, then paste the following):
#!/bin/sh
nohup ./bin/kafka-server-start.sh config/server.properties &
if [ $? -eq 0 ]; then
    echo "Kafka server start success..."
else
    echo "Kafka server start failed..."
fi
6. Start the service
./startKafkaServer.sh
7. Verification test
a. Create topic
./bin/kafka-topics.sh --zookeeper dckafka1:12181,dckafka2:12181 --create --topic mytest --replication-factor 1 --partitions 9
b. View the topic list
./bin/kafka-topics.sh --zookeeper dckafka1:12181,dckafka2:12181 --list
c. Create a producer
./bin/kafka-console-producer.sh --broker-list dckafka1:9092,dckafka2:9092,dckafka3:9092 --topic mytest
d. Create a consumer
./bin/kafka-console-consumer.sh --zookeeper dckafka1:12181,dckafka2:12181,dckafka3:12181 --topic mytest --from-beginning
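To confirm the topic was created with the expected layout, kafka-topics.sh --describe reports a summary line containing the partition count. A sketch for checking it in a script (the sample line below approximates the 0.8.2 output format; treat the exact format as an assumption and adjust the pattern against your own output):

```shell
# partition_count: pull the PartitionCount value out of captured
# `kafka-topics.sh --describe` output on stdin
partition_count() {
  sed -n 's/.*PartitionCount:\([0-9]*\).*/\1/p' | head -n 1
}

# Example, piping a captured describe summary line:
printf 'Topic:mytest\tPartitionCount:9\tReplicationFactor:1\tConfigs:\n' \
  | partition_count   # prints 9
```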
Jstorm installation
1. Enter the home directory: cd /home/hadoop
2. Download the jstorm package: wget "http://42.121.19.155/jstorm/jstorm-2.1.0.tar.bz2"
(This version was used for Alibaba's Singles' Day shopping festival, November 11, 2015.)
3. Extract the installation package: tar -jxf jstorm-2.1.0.tar.bz2
4. Move and enter the directory: mv deploy/jstorm jstorm-2.1.0; cd jstorm-2.1.0
jstorm-2.1.0 adds some deployment and management scripts; they are not needed here and can be ignored.
5. Configuration modification (see https://github.com/alibaba/jstorm/wiki/ for installation details)
vi ~/.bashrc
Modify or add the following:
export JSTORM_HOME=/home/hadoop/jstorm-2.1.0
export PATH=$PATH:$JSTORM_HOME/bin
vi conf/storm.yaml
Modify or add the following:
storm.zookeeper.servers:
    - "dckafka1"
    - "dckafka2"
    - "dckafka3"
storm.zookeeper.port: 2181
storm.zookeeper.root: "/jstorm2"
storm.local.dir: "/data1/jstorm/data"
jstorm.log.dir: "/data1/jstorm/logs"
supervisor.slots.ports.base: 6800
supervisor.slots.port.cpu.weight: 1.0
supervisor.slots.port.mem.weight: 0.6
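With this configuration, each supervisor's worker slots are numbered upward from the base port 6800 (jstorm derives the actual slot count from the CPU/memory weights and the machine's resources). A sketch of the resulting port layout, with the slot count of 4 chosen purely for illustration:

```shell
# slot_ports: list the worker slot ports a supervisor would use,
# given the configured base port and an assumed slot count
slot_ports() {
  base=$1; n=$2; i=0
  while [ "$i" -lt "$n" ]; do
    echo $((base + i))
    i=$((i + 1))
  done
}

slot_ports 6800 4   # prints 6800 6801 6802 6803, one per line
```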
6. Start the service
nohup ./bin/jstorm supervisor & (run on all machines)
nohup ./bin/jstorm nimbus & (run on the machine chosen as the nimbus node)
7. Install jstorm Web UI
mkdir ~/.jstorm
cp $JSTORM_HOME/conf/storm.yaml ~/.jstorm
Deploy jstorm-ui-2.1.0.war in a tomcat container and start it.
Visit the UI page: http://127.0.0.1:9091
8. Submit a topology task (on submitting clients, first create a symlink under the home directory: ln -s ~/.jstorm/storm.yaml $JSTORM_HOME/conf/storm.yaml)
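A topology is submitted with the jstorm jar subcommand. A minimal sketch of assembling the command (the jar path, main class, and topology name below are hypothetical placeholders, not values from this deployment):

```shell
# Sketch: build the submit command for step 8. All three values are
# placeholders to be replaced with your own jar, main class, and name.
jar=/home/hadoop/topologies/example-topology.jar
main_class=com.example.MyTopology
topology=mytest-topology

cmd="jstorm jar $jar $main_class $topology"
echo "$cmd"
```

Once submitted, the topology should appear on the jstorm Web UI page set up in step 7.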