2025-04-06 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Thoughts on Migrating Dubbo Microservices to Kubernetes, and How We Did It
When it comes to containerization, we have to mention Kubernetes: an open-source cluster orchestration system for automatically deploying, scaling, and managing containerized applications.
Kubernetes groups the containers that make up an application into logical units for easy management and discovery. It builds on 15 years of experience running production workloads at its parent company, combined with best advice and practices from the community. By 2016, large companies abroad such as Amazon and IBM, as well as Alibaba in China, had added native support for Kubernetes.
Microservices have also been very popular in recent years. Frameworks such as Spring Cloud, Spring Boot, and Alibaba's open-source Dubbo are widely adopted by Chinese companies. Much of this is a reaction to the problems of the traditional monolithic architecture: some projects have hundreds of thousands of lines of code, the boundaries between modules are blurred, the logic is tangled, and the more code there is, the higher the complexity and the harder problems become to solve.
These are not the only problems, of course. Choosing microservices also follows the broader trend; it genuinely makes the architecture clearer and makes a larger number of service modules easier to manage.
Why are microservices a good fit for containerization?
Netflix's director of cloud architecture has called the combination of microservices and Docker disruptive. Docker provides an ideal runtime environment for microservices, Kubernetes provides the cluster orchestration on top of it, and containers make DevOps for microservices straightforward. When we get to Dubbo below, we will look at the relationship between microservices and Kubernetes, and why Dubbo is particularly well suited to deployment on Kubernetes.
What is Dubbo?
Dubbo is the core framework of Alibaba's SOA service governance solution. It supports 2,000+ services handling 3 billion+ calls per day and is widely used across the member sites of the Alibaba Group. In short, it is a microservice governance framework that many Internet companies use, and a very good one.
Dubbo is a distributed service framework dedicated to providing a high-performance, transparent RPC remote invocation solution, together with an SOA service governance solution.
Simply put, Dubbo is a service framework. If you have no distributed requirements, you do not need it; only in a distributed system does a framework like Dubbo become necessary, and at its core it is about service invocation. Put bluntly, it is a distributed framework for remote service calls. Dubbo and Kubernetes are also a natural match: since Kubernetes is itself a distributed infrastructure platform, delivering Dubbo microservices onto Kubernetes is a very good fit.
What can Dubbo do?
Transparent remote method invocation: calling a remote method is like calling a local method, with simple configuration and no API intrusion. Put plainly, Dubbo consists of consumers and providers; a consumer invokes a provider's methods just as if they were local, and the configuration is simple.
Soft load balancing and fault tolerance: inside the intranet, Dubbo can replace hardware load balancers such as F5, reducing cost and removing a single point of failure. Dubbo provides the load-balancing mechanism itself: a provider registers automatically after starting, and requests are dispatched to back-end providers according to the registry's scheduling algorithm. No matter how many replicas you run, you never have to add them to a load balancer by hand. Kubernetes follows the same line of thinking: it defines a Service, and back-end Pods are reached through that Service entrance. The designs of Dubbo and Kubernetes coincide here.
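As a sketch of the Kubernetes side of this analogy, a Service like the following fronts however many provider replicas exist, just as the Dubbo registry fronts providers. The names here are illustrative, not from a real deployment:

```yaml
# Illustrative only: a Service that load-balances across provider Pods.
apiVersion: v1
kind: Service
metadata:
  name: dubbo-provider        # hypothetical name
spec:
  selector:
    app: dubbo-provider       # matches the labels on the provider Pods
  ports:
    - port: 20880             # Dubbo's default protocol port
      targetPort: 20880
```

Scaling the provider Deployment up or down changes nothing here; the Service keeps routing to whatever Pods match the selector.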
Automatic service registration and discovery: there is no need to hard-code the address of a service provider. The registry looks up a provider's IP address by interface name, and providers can be added or removed smoothly. This matters for providers and consumers alike: everyone needs the registry. A provider registers and then serves; a consumer finds it through the registry.
Summary: the three most typical characteristics.
First, completely transparent, local-style invocation.
Second, the built-in load-balancing mechanism.
Third, automatic service registration and discovery.
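To illustrate what "transparent invocation" means, here is a minimal, self-contained sketch using the JDK's java.lang.reflect.Proxy, the same mechanism RPC frameworks like Dubbo build on under the hood. This is not Dubbo itself; GreetingService and the in-process handler are hypothetical stand-ins for a real provider and its network stub.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical service interface shared by provider and consumer.
interface GreetingService {
    String greet(String name);
}

public class TransparentCallDemo {
    // The consumer obtains a proxy; the handler stands in for the RPC stub
    // that would serialize the call and send it to a remote provider.
    public static GreetingService stub() {
        InvocationHandler handler = (Object proxy, Method method, Object[] args) -> {
            // A real framework would route this over the network; here we
            // simply simulate the provider's implementation in-process.
            if (method.getName().equals("greet")) {
                return "Hello, " + args[0];
            }
            throw new UnsupportedOperationException(method.getName());
        };
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[]{GreetingService.class},
                handler);
    }

    public static void main(String[] args) {
        GreetingService service = stub();
        // Looks exactly like a local call, which is the whole point.
        System.out.println(service.greet("Dubbo"));
    }
}
```

The consumer code never sees the handler; it just calls greet() on an interface, which is what "no API intrusion" means in practice.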
Now let's take a look at Dubbo's architecture diagram.
[Dubbo architecture diagram: image not preserved in this copy]
Provider: the provider that exposes services. A provider offers services for remote invocation based on the RPC (Remote Procedure Call) mechanism, for example computing services, storage services, or services that talk to a database.
Consumer: the service consumer that invokes remote services. Consumers mainly provide web interfaces and UI-layer pages, while the heavier back-end services live in providers; the consumer then calls the provider just as it would call a local method.
Registry: the registry for service registration and discovery.
Monitor: the monitoring center that counts service invocations and their call times.
Container: the container in which the service runs.
Now let's talk about the concrete architecture and how to implement it.
First of all, you need a Kubernetes cluster. Generally you deploy this yourself, and you should make it highly available, with multiple master nodes sharing the work so that the environment can grow. That is the first priority.
The next thing, as many people know, is Pod persistence. A Pod's life cycle is inherently short: if a Pod is restarted, or something unusual happens, its IP changes and anything written inside it is gone, so you must plan for persistence. For example, if Jenkins runs in Kubernetes, you need to take care of its working directory, JENKINS_HOME: the files Jenkins generates and the workspaces created when you pull code all land under that directory. If you deploy Jenkins in Kubernetes, you need persistence to mount that working directory onto shared storage on another server; without it, the working directory is lost whenever the Pod restarts. The same applies to anything else kept inside the container, such as maintenance-related commands you installed, or the keys you generated with ssh-keygen in the container for password-free login; all of it is lost on restart. You can still deploy Jenkins on Kubernetes, but maintaining such a container is relatively troublesome.
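If you do run Jenkins inside Kubernetes, the persistence just described would look roughly like the following NFS-backed PersistentVolume and claim. The server address and paths here are assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 192.168.1.100     # hypothetical NFS server
    path: /opt/k8s/project    # the exported shared directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
```

The Jenkins Pod would then mount the claim at its JENKINS_HOME path, so workspaces survive restarts.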
So instead we deploy Jenkins on a separate, standalone server. In addition, using Jenkins's master/slave architecture, a single Jenkins can publish to both the test and the production environments.
When deploying to a separate server, I start Jenkins from the war package. By default Jenkins places its working directory under .jenkins, so to change it I edit Tomcat's catalina.sh, add export JENKINS_HOME=/opt/k8s/project pointing at the shared directory, and make it take effect in the environment before starting the war package. That way all Jenkins files land on shared storage. I used NFS as the shared storage for this.
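Concretely, the change to Tomcat's startup script is a single exported variable; a sketch of the added lines, assuming the NFS-shared directory from this setup:

```shell
# Added near the top of $TOMCAT_HOME/bin/catalina.sh so that Jenkins
# writes its configuration and workspaces to the NFS-shared directory
# instead of the default ~/.jenkins.
export JENKINS_HOME=/opt/k8s/project
```

After restarting Tomcat, every job workspace and config file lands under the shared directory.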
There are many other shared-storage options as well, such as GlusterFS, Ceph, and so on.
The next step is to deploy CoreDNS, which handles domain-name resolution for Kubernetes Services. For example, you can run a busybox test pod and resolve kubernetes.default inside the cluster; if CoreDNS and your CNI network plug-in are healthy, it resolves normally. After that comes the ELK logging stack. I deployed it inside Kubernetes for ease of management, although deploying it outside is of course also possible. I use Elasticsearch, Filebeat, and Kibana to collect the logs of our Dubbo microservice project.
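The CoreDNS check mentioned above can be done with a one-off busybox Pod. These are standard kubectl commands, run against your own cluster:

```shell
# Launch a throwaway busybox Pod and resolve the cluster's own API Service.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never \
  -- nslookup kubernetes.default
# A healthy CoreDNS + CNI setup answers with the Service's cluster IP.
```

The 1.28 busybox image is commonly used for this because its nslookup output is reliable inside clusters.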
In this case the Pod is of the closely coupled, sidecar type: two containers are deployed in one Pod.
Filebeat runs alongside the application container in that Pod, collects the logs the application writes, ships them to Elasticsearch, and Kibana then displays the logs of our Dubbo microservices. That is this piece of the setup.
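A minimal sketch of that two-container sidecar Pod, assuming the Dubbo application writes logs to a file; image names and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dubbo-app
spec:
  containers:
    - name: app
      image: harbor.example.com/project/dubbo-app:latest  # hypothetical image
      volumeMounts:
        - name: app-logs
          mountPath: /app/logs            # the app writes its log files here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:7.17.0
      volumeMounts:
        - name: app-logs
          mountPath: /app/logs            # filebeat tails the same files
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}                        # shared between the two containers
```

The shared emptyDir volume is what lets Filebeat see the application's log files without the two containers knowing about each other.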
In addition, we need a unified external entrance for our RESTful interfaces. Inside Kubernetes we publish them via a ClusterIP Service, reverse-proxy to it through Nginx, and put our SLB load balancer in front, which provides the public address that serves as the unified external entrance.
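The Nginx layer in front can be as simple as a reverse-proxy block like the following. The upstream addresses (points where the cluster's Service is reachable from the Nginx host) and the server name are assumptions:

```nginx
# Illustrative reverse proxy in front of the cluster's RESTful services.
upstream dubbo_rest {
    server 10.0.0.10:8080;   # hypothetical address where the Service is reachable
    server 10.0.0.11:8080;
}
server {
    listen 80;
    server_name api.example.com;   # hypothetical name, fronted by the SLB
    location / {
        proxy_pass http://dubbo_rest;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The SLB then forwards public traffic to this Nginx, keeping a single entrance in front of however many back-end replicas exist.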
Finally, we use Jenkins to build the project and deploy it to Kubernetes. At present we use parameterized freestyle jobs; since every project's requirements differ, we release with scripts we write ourselves, although a pipeline would also work.
Because the code is hosted on GitLab, Jenkins needs non-interactive login to GitLab. First run ssh-keygen on the Jenkins host to generate id_rsa and id_rsa.pub, the private and public key respectively; store the private key as a credential in Jenkins, and add the public key to the trusted keys in GitLab. Then configure Git, set up a parameterized build, and select Maven. All prerequisites are configured in advance: the JDK and Maven environments are placed under /etc/profile and sourced. After the build runs, the application jar package appears under the project's target directory. We package the application into a container by writing a Dockerfile on top of a JRE base image, push the image to our Harbor repository, and then publish the service by applying the Kubernetes YAML.
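The whole release step described above can be sketched as a Jenkins "Execute shell" build step. The registry address, image name, and file names are all assumptions, and the commands run on the Jenkins host against your own cluster:

```shell
#!/bin/sh
set -e

# 1. Build the jar from the checked-out source.
mvn clean package -DskipTests

# 2. Package the application into an image (Dockerfile based on a JRE image).
IMAGE=harbor.example.com/project/dubbo-app:${BUILD_NUMBER:-dev}
docker build -t "$IMAGE" .

# 3. Push to the Harbor registry.
docker push "$IMAGE"

# 4. Publish by applying the Kubernetes YAML with the new image filled in.
sed "s|IMAGE_PLACEHOLDER|$IMAGE|" deploy.yaml | kubectl apply -f -
```

Tagging the image with Jenkins's BUILD_NUMBER keeps each release traceable back to the build that produced it.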
© 2024 shulou.com SLNews company. All rights reserved.