Kafka-2.11 Learning Notes (2): Introduction to the Shell Scripts

2025-01-25 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report

Lu Chunli's work notes. Who says programmers can't have a literary flair?

The main shell scripts that ship with Kafka are:

[hadoop@nnode kafka0.8.2.1]$ ll
total 80
-rwxr-xr-x 1 hadoop hadoop  943 2015-02-27 kafka-console-consumer.sh
-rwxr-xr-x 1 hadoop hadoop  942 2015-02-27 kafka-console-producer.sh
-rwxr-xr-x 1 hadoop hadoop  870 2015-02-27 kafka-consumer-offset-checker.sh
-rwxr-xr-x 1 hadoop hadoop  946 2015-02-27 kafka-consumer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop  860 2015-02-27 kafka-mirror-maker.sh
-rwxr-xr-x 1 hadoop hadoop  884 2015-02-27 kafka-preferred-replica-election.sh
-rwxr-xr-x 1 hadoop hadoop      2015-02-27 kafka-producer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-reassign-partitions.sh
-rwxr-xr-x 1 hadoop hadoop  866 2015-02-27 kafka-replay-log-producer.sh
-rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-replica-verification.sh
-rwxr-xr-x 1 hadoop hadoop 4185 2015-02-27 kafka-run-class.sh
-rwxr-xr-x 1 hadoop hadoop 1333 2015-02-27 kafka-server-start.sh
-rwxr-xr-x 1 hadoop hadoop      2015-02-27 kafka-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop  868 2015-02-27 kafka-simple-consumer-shell.sh
-rwxr-xr-x 1 hadoop hadoop  861 2015-02-27 kafka-topics.sh
drwxr-xr-x 2 hadoop hadoop 4096 2015-02-27 windows
-rwxr-xr-x 1 hadoop hadoop 1370 2015-02-27 zookeeper-server-start.sh
-rwxr-xr-x 1 hadoop hadoop      2015-02-27 zookeeper-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop  968 2015-02-27 zookeeper-shell.sh
[hadoop@nnode kafka0.8.2.1]$

Note: Kafka also provides .bat scripts for Windows, in the bin/windows directory.

ZooKeeper script

Every Kafka component depends on ZooKeeper, so a ZooKeeper environment must exist before Kafka can be used. You can either configure a separate ZooKeeper cluster, or use the scripts bundled with Kafka to start a standalone ZooKeeper node.
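The standalone ZooKeeper started this way reads config/zookeeper.properties. For reference, a minimal sketch of that file; the values shown are the stock defaults as I recall them for this Kafka version, so treat them as assumptions:

```
# the directory where the snapshot is stored
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip connection limit (non-production config)
maxClientCnxns=0
```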

# Start the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-start.sh
USAGE: bin/zookeeper-server-start.sh zookeeper.properties
# The configuration file is config/zookeeper.properties, which mainly sets
# ZooKeeper's local storage path (dataDir).
# Internally the script calls:
#   exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"

# Stop the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-stop.sh
# Internally the script calls:
#   ps ax | grep -i 'zookeeper' | grep -v grep | awk '{print $1}' | xargs kill -SIGINT

# ZooKeeper shell (the server address is passed as a parameter)
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh
USAGE: bin/zookeeper-shell.sh zookeeper_host:port[/path] [args...]
# Internally the script calls:
#   exec $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMain -server "$@"

# Use the ZooKeeper shell to view ZooKeeper's node information
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh nnode:2181,dnode1:2181,dnode2:2181/
Connecting to nnode:2181,dnode1:2181,dnode2:2181/
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
ls /
[hbase, hadoop-ha, admin, zookeeper, consumers, config, zk-book, brokers, controller_epoch]

Note: $@ expands to the list of all positional parameters passed to the script, and $# is the number of those parameters.
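A quick illustration of the difference; show_args is a throwaway helper for this note, not part of Kafka:

```shell
# show_args: toy function demonstrating "$@" and $#
show_args() {
  printf 'count=%s\n' "$#"      # $# = how many parameters were passed
  for arg in "$@"; do           # "$@" expands each parameter as its own word
    printf 'arg=%s\n' "$arg"
  done
}

show_args one "two three"
# prints:
# count=2
# arg=one
# arg=two three
```

Note that quoting "$@" preserves a parameter containing spaces ("two three") as a single word, which is exactly why the Kafka launch scripts pass arguments along as $@.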

Kafka start and stop

# Start the Kafka server
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-server-start.sh
USAGE: bin/kafka-server-start.sh [-daemon] server.properties
# Internally the script calls:
#   exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"

# (Output omitted)
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-run-class.sh

# Stop the Kafka server
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-server-stop.sh
# Internally the script calls:
#   ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
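The stop script's process-matching pipeline can be exercised without killing anything by feeding it a fake ps snapshot; fake_ps and the PIDs below are made up for illustration:

```shell
# fake_ps: stands in for `ps ax` so the pipeline can be tested safely
fake_ps() {
  cat <<'EOF'
 1234 pts/0 Sl 0:42 java -Xmx1G kafka.Kafka config/server.properties
 5678 pts/1 S+ 0:00 grep -i kafka.Kafka
EOF
}

# Same filters as kafka-server-stop.sh, minus the final `xargs kill -SIGTERM`:
# match kafka.Kafka, keep only java processes, drop the grep itself, take column 1.
pid=$(fake_ps | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
echo "$pid"   # prints 1234: only the broker's JVM survives the filters
```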

Note: when Kafka starts, it reads its configuration from config/server.properties. The three core configuration items for Kafka Server are:

broker.id: the unique identifier of the broker, a non-negative integer (for example, the last octet of the host's IP address).
port: the port the server listens on for client connections (default 9092).
zookeeper.connect: the ZooKeeper connection string, in the format hostname1:port1[,hostname2:port2,hostname3:port3].

Optional:

log.dirs: where Kafka stores its data (default /tmp/kafka-logs); a comma-separated list of one or more directories. When a new partition is created, it is placed in whichever directory currently holds the fewest partitions.
num.partitions: the number of partitions per topic (default 1); it can also be specified when creating a topic.

For other options, see http://kafka.apache.org/documentation.html#configuration
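Putting the items above together, a minimal config/server.properties for a single-broker setup might look like the following sketch; the host names follow the nnode/dnode examples used in this note, and the values are assumptions, not a tested configuration:

```
broker.id=0
port=9092
zookeeper.connect=nnode:2181,dnode1:2181,dnode2:2181
log.dirs=/tmp/kafka-logs
num.partitions=1
```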

Kafka message

# Message producer
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
Option            Description
--broker-list     REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.
--topic           REQUIRED: The topic id to produce messages to.
# These two parameters are required; run the command with no arguments to see the remaining optional parameters.

# Message consumer
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-consumer.sh
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option            Description
--zookeeper       REQUIRED: The connection string for the zookeeper connection, in the form host:port. (Multiple URLs can be given to allow fail-over.)
--topic           The topic id to consume on.
--from-beginning  If the consumer does not already have an established offset to consume from, start with the earliest message present in the log rather than the latest message.
# Only --zookeeper is required; see the help output for the other optional parameters.

# View topic information
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-topics.sh
Create, delete, describe, or change a topic.
Option            Description
--zookeeper       REQUIRED: The connection string for the zookeeper connection, in the form host:port. (Multiple URLs can be given to allow fail-over.)
--create          Create a new topic.
--delete          Delete a topic.
--alter           Alter the configuration for the topic.
--list            List all available topics.
--describe        List details for the given topics.
--topic           The topic to be created, altered or described. Can also accept a regular expression, except with the --create option.
--help            Print usage information.
# Only --zookeeper is required; see the help output for the other optional parameters.
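Putting the three tools together, a typical session might look like the following sketch. The topic name "test" is made up for illustration, and it assumes a broker reachable at nnode:9092 and ZooKeeper at nnode:2181, matching the hosts used earlier in this note:

```shell
# 1. Create a topic with one partition and a single replica
bin/kafka-topics.sh --zookeeper nnode:2181 --create --topic test \
    --partitions 1 --replication-factor 1

# 2. Produce: each line typed on stdin becomes one message (Ctrl+C to quit)
bin/kafka-console-producer.sh --broker-list nnode:9092 --topic test

# 3. Consume every message from the beginning of the log
bin/kafka-console-consumer.sh --zookeeper nnode:2181 --topic test --from-beginning
```

These commands require a running broker and ZooKeeper, so they are shown as a reference session rather than something to paste blindly.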

The remaining scripts are not covered in detail here.
