How to Build a Standalone Kafka Environment
This article introduces how to build a standalone Kafka environment. The steps below cover starting ZooKeeper, starting the Kafka broker, creating a topic, and producing and consuming messages. Pairing the explanation with hands-on practice works best, so follow along and try each step yourself.
1. Install and start ZooKeeper
[root@node1 bin]# ./zkServer.sh
JMX enabled by default
Using config: /opt/bigdata/zookeeper/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
[root@node1 bin]# ./zkServer.sh start-foreground
JMX enabled by default
Using config: /opt/bigdata/zookeeper/bin/../conf/zoo.cfg
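Note that start-foreground ties ZooKeeper to the current terminal, which is handy for watching its log but means the Kafka steps below need a second terminal. Alternatively, start it as a daemon and confirm it is up; the output should look roughly like this, with the Mode: standalone line confirming single-node operation:
[root@node1 bin]# ./zkServer.sh start
[root@node1 bin]# ./zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper/bin/../conf/zoo.cfg
Mode: standalone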
2. Build the Kafka environment
[root@node1 bin]# ./kafka-server-start.sh ../config/server.properties
[2016-04-02 04:10:29,995] INFO KafkaConfig values:
request.timeout.ms = 30000
log.roll.hours = 168
inter.broker.protocol.version = 0.9.0.X
log.preallocate = false
security.inter.broker.protocol = PLAINTEXT
controller.socket.timeout.ms = 30000
broker.id.generation.enable = true
ssl.keymanager.algorithm = SunX509
ssl.key.password = null
log.cleaner.enable = true
ssl.provider = null
num.recovery.threads.per.data.dir = 1
background.threads = 10
unclean.leader.election.enable = true
sasl.kerberos.kinit.cmd = /usr/bin/kinit
replica.lag.time.max.ms = 10000
ssl.endpoint.identification.algorithm = null
auto.create.topics.enable = true
zookeeper.sync.time.ms = 2000
ssl.client.auth = none
ssl.keystore.password = null
log.cleaner.io.buffer.load.factor = 0.9
offsets.topic.compression.codec = 0
log.retention.hours = 168
log.dirs = /tmp/kafka-logs
ssl.protocol = TLS
log.index.size.max.bytes = 10485760
sasl.kerberos.min.time.before.relogin = 60000
log.retention.minutes = null
connections.max.idle.ms = 600000
ssl.trustmanager.algorithm = PKIX
offsets.retention.minutes = 1440
max.connections.per.ip = 2147483647
replica.fetch.wait.max.ms = 500
metrics.num.samples = 2
port = 9092
offsets.retention.check.interval.ms = 600000
log.cleaner.dedupe.buffer.size = 134217728
log.segment.bytes = 1073741824
group.min.session.timeout.ms = 6000
producer.purgatory.purge.interval.requests = 1000
min.insync.replicas = 1
ssl.truststore.password = null
log.flush.scheduler.interval.ms = 9223372036854775807
socket.receive.buffer.bytes = 102400
leader.imbalance.per.broker.percentage = 10
num.io.threads = 8
zookeeper.connect = localhost:2181
queued.max.requests = 500
offsets.topic.replication.factor = 3
replica.socket.timeout.ms = 30000
offsets.topic.segment.bytes = 104857600
replica.high.watermark.checkpoint.interval.ms = 5000
broker.id = 0
ssl.keystore.location = null
listeners = PLAINTEXT://:9092
log.flush.interval.messages = 9223372036854775807
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
log.retention.ms = null
offsets.commit.required.acks = -1
sasl.kerberos.principal.to.local.rules = [DEFAULT]
group.max.session.timeout.ms = 30000
num.replica.fetchers = 1
advertised.listeners = null
replica.socket.receive.buffer.bytes = 65536
delete.topic.enable = false
log.index.interval.bytes = 4096
metric.reporters = []
compression.type = producer
log.cleanup.policy = delete
controlled.shutdown.max.retries = 3
log.cleaner.threads = 1
quota.window.size.seconds = 1
zookeeper.connection.timeout.ms = 6000
offsets.load.buffer.size = 5242880
zookeeper.session.timeout.ms = 6000
ssl.cipher.suites = null
authorizer.class.name =
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.service.name = null
controlled.shutdown.enable = true
offsets.topic.num.partitions = 50
quota.window.num = 11
message.max.bytes = 1000012
log.cleaner.backoff.ms = 15000
log.roll.jitter.hours = 0
log.retention.check.interval.ms = 300000
replica.fetch.max.bytes = 1048576
log.cleaner.delete.retention.ms = 86400000
fetch.purgatory.purge.interval.requests = 1000
log.cleaner.min.cleanable.ratio = 0.5
offsets.commit.timeout.ms = 5000
zookeeper.set.acl = false
log.retention.bytes = -1
offset.metadata.max.bytes = 4096
leader.imbalance.check.interval.seconds = 300
quota.consumer.default = 9223372036854775807
log.roll.jitter.ms = null
reserved.broker.max.id = 1000
replica.fetch.backoff.ms = 1000
advertised.host.name = null
quota.producer.default = 9223372036854775807
log.cleaner.io.buffer.size = 524288
controlled.shutdown.retry.backoff.ms = 5000
log.dir = /tmp/kafka-logs
log.flush.offset.checkpoint.interval.ms = 60000
log.segment.delete.delay.ms = 60000
num.partitions = 1
num.network.threads = 3
socket.request.max.bytes = 104857600
sasl.kerberos.ticket.renew.window.factor = 0.8
log.roll.ms = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
socket.send.buffer.bytes = 102400
log.flush.interval.ms = null
ssl.truststore.location = null
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
default.replication.factor = 1
metrics.sample.window.ms = 30000
auto.leader.rebalance.enable = true
host.name =
ssl.truststore.type = JKS
advertised.port = null
max.connections.per.ip.overrides =
replica.fetch.min.bytes = 1
ssl.keystore.type = JKS
(kafka.server.KafkaConfig)
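Almost everything in the dump above is a default. For a standalone broker, only a handful of entries in config/server.properties usually matter; a minimal sketch (the values mirror the log above, but the paths are assumptions to adjust for your own layout):
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
num.partitions=1
Note that log.dirs defaults to /tmp/kafka-logs, which is typically wiped on reboot, so point it at a persistent directory for anything beyond a quick test.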
3. Create a topic
[root@node1 bin]# ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
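To verify the partition count and replica assignment of the new topic, kafka-topics.sh also accepts --describe; the output should resemble:
[root@node1 bin]# ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test  PartitionCount:1  ReplicationFactor:1  Configs:
Topic: test  Partition: 0  Leader: 0  Replicas: 0  Isr: 0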
4. List the topic
[root@node1 bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
test
5. Produce messages
[root@node1 bin]# ./kafka-console-producer.sh --broker-list localhost:9092 --topic test
fhgfhfgh\
gjgjhgjg
gjhgjkghk
nvnbv
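The console producer treats each line of stdin as one message, so besides typing interactively you can redirect a file into it; for example, with a hypothetical messages.txt containing one message per line:
[root@node1 bin]# ./kafka-console-producer.sh --broker-list localhost:9092 --topic test < messages.txt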
6. Consume messages
[root@node1 ~]# cd /opt/bigdata/kafka/bin/
[root@node1 bin]# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
fhgfhfgh\
gjgjhgjg
gjhgjkghk
nvnbv
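The console consumer keeps running and prints new messages as they arrive until you stop it with Ctrl+C. To read a fixed number of messages and then exit (useful in scripts), it also accepts --max-messages; the count here is just an example:
[root@node1 bin]# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning --max-messages 4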
This concludes the walkthrough of building a standalone Kafka environment. Hopefully it has cleared up any doubts; combining the theory with hands-on practice is the best way to learn, so go and try it yourself!