Deploying Mainstream Java Applications on Kubernetes

2025-01-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Deploying a Java project on Kubernetes

How do we migrate a project onto the k8s platform? The main steps are:

1. Build an image

2. Manage Pods with a controller

3. Persist Pod data

4. Expose the application (Service)

5. Publish the application externally (Ingress)

6. Logging / monitoring

1. Building an image involves three layers. The first is the base image: which operating system it builds on, such as CentOS 7 or another distribution.

The second is the middleware (service) image, which runs a service such as nginx or Tomcat on top of the base image.

The third is the project image, built on top of the service image: your project is packaged into it so that it runs inside the service image.

Generally, our operations staff prepare these images in advance and developers use them directly. The images must match the current environment.
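The three-layer scheme can be sketched as a Dockerfile. The image name `mycorp/centos7-tomcat` is a hypothetical placeholder for a middleware image prepared in advance, not a name from this article:

```dockerfile
# Layer 1 (base) and Layer 2 (middleware) are prepared in advance, e.g. a
# hypothetical mycorp/centos7-tomcat image: CentOS 7 plus OpenJDK and Tomcat.

# Layer 3: the project image, built on top of the middleware image.
FROM mycorp/centos7-tomcat
# Clear the default webapps and deploy our WAR as the ROOT application
RUN rm -rf /usr/local/tomcat/webapps/*
ADD target/*.war /usr/local/tomcat/webapps/ROOT.war
```

This mirrors the pattern used later in this article, where the project WAR is layered onto a ready-made Tomcat image.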

2. Manage Pods with a controller

That is, have k8s deploy the image. We usually deploy through a controller, and the most commonly used one is Deployment.

Deployment: stateless deployment

StatefulSet: stateful deployment

DaemonSet: daemon deployment

Job & CronJob: batch processing

What is the difference between stateful and stateless?

Stateful workloads have an identity: a stable network ID and dedicated storage, both planned in advance, and they start and stop in an orderly manner.

In short: persistence versus non-persistence.
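A minimal sketch of how a StatefulSet pins down that identity, assuming a hypothetical `web` application with a headless Service (none of these names come from this article):

```yaml
# Headless Service gives each replica a stable DNS name:
# web-0.web.default.svc.cluster.local, web-1.web..., etc.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None        # headless: no load-balanced virtual IP
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web       # ties Pods to the headless Service above
  replicas: 2            # Pods start/stop in order: web-0, then web-1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
```

A Deployment, by contrast, gives its replicas random names and interchangeable identities, which is why it suits stateless services.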

3. Persist Pod data

Pod data persistence is mainly about the application itself: for a given project, ask whether it writes data to local files. If it does, and that data must survive, then Pod data persistence is required.

There are generally three types of data during container deployment:

The initial data required at startup, which can be a configuration file

Temporary data generated during startup, which needs to be shared among multiple containers

Persistent data generated during startup
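The three kinds of data map onto three volume types. A sketch under assumed names (`app-config` and `app-data` are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.17
    volumeMounts:
    - name: config          # initial data at startup: a configuration file
      mountPath: /etc/app
    - name: scratch         # temporary data, shareable between containers
      mountPath: /tmp/cache
    - name: data            # persistent data that must survive the Pod
      mountPath: /var/lib/app
  volumes:
  - name: config
    configMap:
      name: app-config      # hypothetical ConfigMap
  - name: scratch
    emptyDir: {}            # deleted together with the Pod
  - name: data
    persistentVolumeClaim:
      claimName: app-data   # hypothetical PVC backed by real storage
```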

4. Expose the application

In k8s, a Deployment is not reachable from outside once deployed: other applications that want to reach it have no way to find it. Why? A Deployment usually runs multiple replicas, which may land on different nodes; when a Pod is rebuilt or the application is re-released, the Pod IPs change. So there is no way to fix on a single Pod to access, and even a fixed one could not serve for the other replicas. For multiple Pods to provide a service, you must put a load balancer in front and offer a single entry point; that unified entry forwards requests to the backend Pods. Accessing this Cluster IP is enough to reach them.

Service

A Service defines a logical set of Pods and a policy for accessing them.

Service was introduced to cope with the dynamic nature of Pods, providing service discovery and load balancing.

CoreDNS resolves Service names.

5. Publish the application externally

After exposure, users need to reach the site, for example an e-commerce website built for them. Ingress and Service complement each other: Service mainly provides access inside the cluster (and can also expose a TCP/UDP port), while Ingress does layer-7 forwarding and provides a unified entry point. As long as traffic reaches the ingress controller, it can be forwarded to every project you have deployed, so all projects are accessed through domain names.

The difference between traditional deployment and k8s deployment

Traditional: developers push code to a repository, typically Git or GitLab. After the code is committed, a CI/CD platform pulls, compiles, and builds it into a WAR package, hands it to Ansible, which ships it to VMs / physical machines; the project is then exposed through a load balancer, with databases, monitoring systems, and logging systems providing related services alongside.

With k8s: developers still push code to the repository, and Jenkins pulls, compiles, and builds it, but the artifact uploaded to the image repository is an image rather than a bare WAR or JAR. The image contains both the project's runtime environment and its code, so it can run on any Docker host; verify first that it runs on Docker, then deploy it to k8s. The built images go into an image repository for central management, since dozens or hundreds may be produced every day. A script (or Jenkins) then talks to the k8s master; k8s schedules the Pods according to the Deployment, and the application is published through Ingress for users to access. Each Ingress is associated with a group of Pods, and a Service load-balances across that group, distinguishing the Pods on each node by label. The database is placed outside the cluster; monitoring and logging systems can be deployed either inside the k8s cluster or outside it.

We deploy monitoring and logging inside the k8s cluster: they are not especially sensitive, they mainly serve operations, development, and debugging, and they do not affect the business, so k8s is the preferred place for them.

Now let's deploy a Java project on our k8s.

I. Install OpenJDK and Maven to compile the project

[root@k8s-master ~]# yum -y install java-1.8.0-openjdk.x86_64 maven
[root@k8s-master ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

Then we pull the code locally. The Dockerfile generally sits in the same directory as the code.

[root@k8s-master tomcat-java-demo-master]# ls
db  Dockerfile  LICENSE  pom.xml  README.md  src
[root@k8s-master tomcat-java-demo-master]# vim Dockerfile
FROM lizhenliang/tomcat
LABEL maintainer zhaochengcheng
RUN rm -rf /usr/local/tomcat/webapps/*
ADD target/*.war /usr/local/tomcat/webapps/ROOT.war

II. Compile

Next, configure Maven's domestic (Aliyun) mirror here so that dependency downloads are faster.

[root@k8s-master CI]# vim /etc/maven/settings.xml
<mirror>
  <id>central</id>
  <mirrorOf>central</mirrorOf>
  <name>aliyun maven</name>
  <url>https://maven.aliyun.com/repository/public</url>
</mirror>
[root@k8s-master tomcat-java-demo-master]# mvn clean package -Dmaven.test.skip=true
[root@k8s-master tomcat-java-demo-master]# ls
db  Dockerfile  LICENSE  pom.xml  README.md  src  target
[root@k8s-master tomcat-java-demo-master]# cd target/
[root@k8s-master target]# ls
classes  generated-sources  ly-simple-tomcat-0.0.1-SNAPSHOT  ly-simple-tomcat-0.0.1-SNAPSHOT.war  maven-archiver  maven-status

We will use this compiled WAR package to build an image, then upload it to our Harbor repository.

[root@k8s-master target]# ls
classes  generated-sources  ly-simple-tomcat-0.0.1-SNAPSHOT  ly-simple-tomcat-0.0.1-SNAPSHOT.war  maven-archiver  maven-status
[root@k8s-master tomcat-java-demo-master]# docker build -t 192.168.30.24/library/java-demo:latest .

III. Upload the image to our repository

[root@k8s-master tomcat-java-demo-master]# docker login 192.168.30.24
Username: admin
Password:
Error response from daemon: Get https://192.168.30.24/v2/: dial tcp 192.168.30.24:443: connect: connection refused

An error is reported here. We actually need to add the Harbor repository as a trusted (insecure) registry on every Docker host; pulling the uploaded image later requires this too.

[root@k8s-master java-demo]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["192.168.30.24"]
}

After restarting Docker for the change to take effect, the push goes through.

[root@k8s-master tomcat-java-demo-master] # docker push 192.168.30.24/library/java-demo:latest

IV. Manage the Pod with a controller

Write the Deployment. A project generally lives under a custom namespace named after the project, for easy recall:

name: tomcat-java-demo

namespace: test

Next come the labels for the project's components. A project is usually made up of many components, so give each one an app label, e.g. component 1, 2, 3; a label should carry at least these two dimensions:

project: www

app: java-demo

Also note which repository the image is pulled from. I suggest giving the image repository's project the same name as the one we defined, to avoid confusion.

I re-tagged it and sent it to our private image repository.

[root@k8s-master java-demo]# docker tag 192.168.30.24/library/java-demo 192.168.30.24/tomcat-java-demo/java-demo
[root@k8s-master java-demo]# docker push 192.168.30.24/tomcat-java-demo/java-demo:latest

Change the image address, too.

imagePullSecrets:
- name: registry-pull-secret
containers:
- name: tomcat
  image: 192.168.30.24/tomcat-java-demo/java-demo:latest

Now start creating our yaml

Create a namespace for the project

[root@k8s-master java-demo]# vim namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
[root@k8s-master java-demo]# kubectl create -f namespace.yaml
namespace/test created
[root@k8s-master java-demo]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h
test              Active   5s

Create a secret to hold the authentication information for our Harbor repository. Be sure to create it in the project's namespace.

[root@k8s-master java-demo]# kubectl create secret docker-registry registry-pull-secret --docker-username=admin --docker-password=Harbor12345 --docker-email=111@qq.com --docker-server=192.168.30.24 -n test
secret/registry-pull-secret created
[root@k8s-master java-demo]# kubectl get secret -n test
NAME                   TYPE                                  DATA   AGE
default-token-2vtgm    kubernetes.io/service-account-token   3      23h
registry-pull-secret   kubernetes.io/dockerconfigjson        1      46s
[root@k8s-master java-demo]# vim deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      project: www
      app: java-demo
  template:
    metadata:
      labels:
        project: www
        app: java-demo
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: tomcat
        image: 192.168.30.24/tomcat-java-demo/java-demo:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
[root@k8s-master java-demo]# kubectl create -f deployment.yaml
[root@k8s-master java-demo]# kubectl get pod -n test
NAME                                READY   STATUS    RESTARTS   AGE
tomcat-java-demo-6d798c6996-fjjvk   1/1     Running   0          2m58s
tomcat-java-demo-6d798c6996-lbklf   1/1     Running   0          2m58s
tomcat-java-demo-6d798c6996-strth   1/1     Running   0          2m58s

Next, expose a Service. The labels here must be consistent with the Deployment's, otherwise the Service cannot find the corresponding Pods and will not serve them. Since we will publish the release through Ingress, a plain ClusterIP is enough.

[root@k8s-master java-demo]# vim service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  selector:
    project: www
    app: java-demo
  ports:
  - name: web
    port: 80
    targetPort: 8080
[root@k8s-master java-demo]# kubectl create -f service.yaml
[root@k8s-master java-demo]# kubectl get pod,svc -n test
NAME                                    READY   STATUS    RESTARTS   AGE
pod/tomcat-java-demo-6d798c6996-fjjvk   1/1     Running   0          37m
pod/tomcat-java-demo-6d798c6996-lbklf   1/1     Running   0          37m
pod/tomcat-java-demo-6d798c6996-strth   1/1     Running   0          37m
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/tomcat-java-demo   ClusterIP   10.1.175.191   <none>        80/TCP    19s

Test access to our project: it works. Now let's publish it through Ingress.

[root@k8s-master java-demo]# curl 10.1.175.191
...(the homepage of the demo application is returned)...

Now deploy an ingress-nginx controller; the manifests can be found online and from the official project. I deploy it as a DaemonSet here, so every node runs one controller.

[root@k8s-master java-demo]# kubectl get pod -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-g95pp   1/1     Running   0          3m6s
nginx-ingress-controller-wq6l6   1/1     Running   0          3m6s

Publish the application

Note two points here: first, the website's domain name; second, the Service's namespace.

[root@k8s-master java-demo]# kubectl get pod,svc -n test
NAME                                    READY   STATUS    RESTARTS   AGE
pod/tomcat-java-demo-6d798c6996-fjjvk   1/1     Running   0          53m
pod/tomcat-java-demo-6d798c6996-lbklf   1/1     Running   0          53m
pod/tomcat-java-demo-6d798c6996-strth   1/1     Running   0          53m
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/tomcat-java-demo   ClusterIP   10.1.175.191   <none>        80/TCP    16m
[root@k8s-master java-demo]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  rules:
  - host: java.maidikebi.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-java-demo
          servicePort: 80
[root@k8s-master java-demo]# kubectl create -f ingress.yaml

Since mine is a test environment, I bind the domain in my local hosts file for access.

Add the domain name and a node IP to the hosts file, and the project can be accessed.
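A sketch of the hosts entry, assuming a node IP of 192.168.30.25 (hypothetical; substitute one of your own node IPs, since the ingress controller runs as a DaemonSet on every node):

```
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts (Windows)
192.168.30.25  java.maidikebi.com
```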
