This article introduces the open-source Logi-KafkaManager in detail; interested readers can use it as a reference, and I hope it is helpful to you.
First, building the debugging environment
Front-end debugging environment
Cloning from Github is relatively slow; gitee is very fast. The code contains several modules: common, console, core, dao, extends, task and web; the web module holds the MainApplication startup class and depends on the others. The console module is the front-end interface, built on react + typescript (a fairly advanced technology stack choice). To run the front end and back end locally from source, the console module is run separately here in VSCode.
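For reference, a minimal sketch of fetching the code and entering the front-end module; the repository path and directory name are assumptions (a gitee mirror can be substituted for faster cloning):
# clone the project and switch to the console front-end module (paths assumed, check the actual repo layout)
git clone https://github.com/didi/Logi-KafkaManager.git
cd Logi-KafkaManager/kafka-manager-console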
# react, like vue, is based on node, so install the npm dependencies and configure the registry before launching
npm config set registry https://registry.npm.taobao.org
npm config list    # view the current npm configuration
npm install
# start the react project
npm start
The console front-end module starts up and runs.
Because the back end and front end are developed in idea and vscode respectively, the reference to the console front-end module needs to be commented out in the back-end project's pom.xml:
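A sketch of that change follows; the module names here are assumptions derived from the module list above (common, console, core, dao, extends, task, web), so check the actual <modules> section of the parent pom.xml:
<modules>
    <module>kafka-manager-common</module>
    <module>kafka-manager-core</module>
    <module>kafka-manager-dao</module>
    <module>kafka-manager-extends</module>
    <module>kafka-manager-task</module>
    <!-- commented out so the back end builds without node; the front end runs separately in vscode -->
    <!-- <module>kafka-manager-console</module> -->
    <module>kafka-manager-web</module>
</modules>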
Back-end debugging environment
The project depends on Maven 3.5+ (back-end packaging), node v12+ (front-end packaging), Java 8+ (runtime), and MySQL 5.7 (data storage); node is not needed here because the front end runs in vscode. Create a kafka_manager database in mysql, run the official sql initialization script, and modify the mysql configuration of the springboot project (the official sql script does not set a character set, which must be added or an error will be reported).
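If the kafka_manager database does not exist yet, it can be created from the shell first; a minimal sketch, assuming the same local root/123456 credentials used in the import command below:
# create the database with an explicit character set, since the official init script does not declare one
mysql -uroot -p123456 -P3306 -e "CREATE DATABASE IF NOT EXISTS kafka_manager DEFAULT CHARACTER SET utf8;"
The official initialization script can then be imported, forcing the character set on the client side: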
mysql --default-character-set=utf8 -uroot -p123456 -P3306 -D kafka_manager < create_mysql_table.sql
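The back end then needs to point at this database. The fragment below is only a sketch of what the datasource section typically looks like, assuming standard Spring Boot datasource keys; the actual property names and defaults in kafka-manager-web/src/main/resources/application.yml may differ:
spring:
  datasource:
    # assumption: the local MySQL created above; adjust host, credentials and parameters to your environment
    url: jdbc:mysql://127.0.0.1:3306/kafka_manager?characterEncoding=UTF-8&useSSL=false
    username: root
    password: 123456
    driver-class-name: com.mysql.jdbc.Driver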
Configure MainApplication.java in the web module as the application main class and start it:
2021-01-25 19:33:22.642 INFO 18000 --- [ main] c.x.kafka.manager.web.MainApplication : MainApplication started
Because it runs locally, the API proxy/target of the console module needs to be modified:
proxy: {
  '/api/v1/': {
    target: 'http://127.0.0.1:8080',
    // target: 'http://10.179.37.199:8008',
    // target: 'http://99.11.45.164:8888',
    changeOrigin: true,
  }
}
With the above, a debugging environment with separated front end and back end runs locally and independently; you can see that the front end reads the kafka cluster configuration from the mysql database.
Second, functional architecture
The tool is best understood through the functional architecture diagram provided officially. Logi-kafka-manager is positioned as an all-round management and control system for kafka clusters: it takes the kafka cluster as its main body, encapsulates and integrates the user APIs that kafka exposes, takes kafka clusters and topic resources as the objects of operation, and provides convenient resource management capabilities for application system users (topic users), kafka/management platform developers, and kafka/management platform operators. Understood this way, the official functional architecture consists of: the resource layer (zk and mysql metadata storage), the engine layer (the kafka cluster itself), the gateway layer (basic management capabilities of the kafka service), the service layer (advanced user APIs), and the platform layer (facing different kinds of users).
Third, deployment verification
Deployment/debugging environment under Windows
Here a local kafka + logi-kafka-manager joint debugging test is carried out on Windows, used for source-code study and joint debugging of kafka + logi-kafka-manager. For how to deploy zookeeper on Windows and run a kafka cluster in idea, refer to the earlier article in this series, "kafka in practice (12): KafkaProducer source code explained and debugged". The environment is configured as follows:
Start zookeeper (3.4.12) locally, service port 2181;
Start the kafka cluster (version 1.0) locally in idea, exposing the service on port 9999, with the topic yzg already created locally;
Start the logi-kafka-manager back-end module locally in idea following the configuration above, and start the console front-end module locally in vscode. The local debugging environment is now complete: http://127.0.0.1:1025/
As a local test, add the kafka cluster to logi-kafka-manager for unified management. With the zk address and kafka address of the newly added local cluster, brokers and topics can be managed in one place, along with subsequent resource allocation. Setting the environment up on Windows makes source-code adjustment and debugging by kafka/management platform staff convenient.
Production use under linux
Production deployment under linux is even simpler: once zk and kafka are deployed, deploy the front end and back end together following the official documentation; this is not verified again here.
# mvn will invoke the npm module to download the node dependencies
mvn install
# application.yml is the configuration file
cp kafka-manager-web/src/main/resources/application.yml kafka-manager-web/target/
cd kafka-manager-web/target/
nohup java -jar kafka-manager-web-2.1.0-SNAPSHOT.jar --spring.config.location=./application.yml >/dev/null 2>&1 &
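Since nohup discards the output above, a quick way to confirm that the web module came up is to probe its HTTP port; a minimal sketch, assuming the service listens on 8080 as in the local proxy target used earlier (the actual port is set in application.yml):
# print the HTTP status code returned by the management console; 200 or a redirect means the service is up
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/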
Fourth, understanding the tool
Application developers
Application developers only care about which topic under which cluster the data of their current application system (mostly log data) should be sent to. For them, logi-kafka-manager provides the "Topic Management", "Cluster Management" and "Monitoring alarm" application menus, which offer the following services:
Create / apply for an application
Apply within "Topic Management" for the topic resources the current application needs (quotas and partitions are adjustable)
Apply for kafka cluster access
Customize alarm rules in "Monitoring alarm" (custom monitoring of consumption offset, consumption rate, cluster status and topic status, with real-time early warning, which is very useful!)
Resource application service
Kafka/management platform developers
For kafka/management platform developers, the application systems, the kafka clusters and the kafka management platform all need to be managed comprehensively. The "OPS Management and Control" menu is added for them, providing cluster operation capabilities such as server.config configuration for kafka clusters as well as user billing management, and offering the following services:
Create / apply for an application
Apply within "Topic Management" for the topic resources the current application needs (quotas and partitions are adjustable)
Customize alarm rules in "Monitoring alarm"
Kafka cluster access, upgrade and configuration modification capabilities
Application management capabilities
Platform user billing management capabilities
Resource application service
Kafka/management platform operation and maintenance personnel
For kafka/management platform operation and maintenance personnel, kafka cluster problems need to be found and fixed quickly. "Expert services" are provided for them, listing common problems and their solutions, and offering the following services:
Create / apply for an application
Apply within "Topic Management" for the topic resources the current application needs (quotas and partitions are adjustable)
Customize alarm rules in "Monitoring alarm"
Kafka cluster access, upgrade and configuration modification capabilities
Application management capabilities
User billing management capabilities
Common kafka cluster problems and repair solutions
Resource application service
Fifth, suggestions for the community
For the logi-kafka-manager tool, I look forward to it integrating the Mirror-maker cross-datacenter data transfer tool, making it easier to configure real-time data transfer and monitor its efficiency.
That is all for this introduction to the open-source Logi-KafkaManager. I hope the above content is of some help and lets you learn more. If you think the article is good, share it so more people can see it.