SLTechnology News & Howtos (shulou.com), Development — updated 2025-01-15
This article explains how to deploy the high-availability configuration center Apollo in K8s. The walkthrough is kept simple and step-by-step so it is easy to follow and reproduce.
Deployment result
The screenshots in the original article show the finished deployment: the Apollo login page, the portal page after logging in with the default user name and password apollo/admin, and the K8s dashboard view of the three environments deployed in this article (dev, fat, and pro).
This article uses the latest Apollo release at the time of writing, 1.7.1, so all of the following builds are based on that version.
First, build the images
First, download the source code from GitHub (https://github.com/ctripcorp/apollo) or, faster within China, from Gitee (https://gitee.com/nobodyiam/apollo). Then go to the directory
scripts/apollo-on-kubernetes
to build the images.
1. Obtain the pre-built Apollo packages
You can download them from the official releases page, but because GitHub can be slow, I recommend downloading from my Baidu Cloud share instead:
Link: https://pan.baidu.com/s/1eLL2ocYE1uzXcvzO2Y3dNg
Extraction code: nfvm
Alternatively, download the pre-built Java packages from https://github.com/ctripcorp/apollo/releases. From scripts/apollo-on-kubernetes/, execute:
(1) wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-portal-1.7.1-github.zip
(2) wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-adminservice-1.7.1-github.zip
(3) wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-configservice-1.7.1-github.zip
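The three downloads follow one URL pattern, so they can also be scripted. This sketch only prints the wget commands (version and URLs exactly as above); pipe it to sh to run them for real.

```shell
# Print the wget commands for the three Apollo 1.7.1 release packages.
VER=1.7.1
BASE="https://github.com/ctripcorp/apollo/releases/download/v${VER}"
dl_cmd() { echo "wget ${BASE}/apollo-$1-${VER}-github.zip"; }
for comp in portal adminservice configservice; do
  dl_cmd "$comp"
done
```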
2. Extract the archives to obtain the jar packages
Don't forget to rename the jars to remove the version number:
(1) Unzip apollo-portal-1.7.1-github.zip to get apollo-portal-1.7.1.jar, rename it to apollo-portal.jar, and put it in scripts/apollo-on-kubernetes/apollo-portal-server.
(2) Unzip apollo-adminservice-1.7.1-github.zip to get apollo-adminservice-1.7.1.jar, rename it to apollo-adminservice.jar, and put it in scripts/apollo-on-kubernetes/apollo-admin-server.
(3) Unzip apollo-configservice-1.7.1-github.zip to get apollo-configservice-1.7.1.jar, rename it to apollo-configservice.jar, and put it in scripts/apollo-on-kubernetes/apollo-config-server.
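Because the target directory names do not match the package names exactly (adminservice goes into apollo-admin-server), it helps to make the mapping explicit. This sketch prints the unzip/rename commands for each component rather than executing them; run from scripts/apollo-on-kubernetes/ after downloading the zips.

```shell
# Print the extract-and-rename commands; $1 = package component, $2 = target dir.
VER=1.7.1
place_jar() {
  echo "unzip -o apollo-$1-${VER}-github.zip"
  echo "mv apollo-$1-${VER}.jar $2/apollo-$1.jar"
}
place_jar portal        apollo-portal-server
place_jar adminservice  apollo-admin-server
place_jar configservice apollo-config-server
```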
3. Build the images
Note: decide on the namespace before building, because it has to be changed in many places at once; I use zizai.
Four images need to be built: alpine-bash-3.8-image, apollo-config-server, apollo-admin-server, and apollo-portal-server. Each image's Dockerfile is in the corresponding directory.
docker build must be executed in the directory containing the Dockerfile. For example, go to scripts/apollo-on-kubernetes/apollo-config-server and execute:
docker build -t apollo-config-server:v1.7.1 .
Note that 4 images need to be built in total. The overall flow for each is: build the image, tag it, then push it to the registry.
In the corresponding directories, the full set of commands is:
alpine-bash-3.8-image image:
docker build -t alpine-bash:3.8 .
docker tag alpine-bash:3.8 hub.thinkinpower.net/zizai/alpine-bash:3.8
docker push hub.thinkinpower.net/zizai/alpine-bash:3.8
Apollo images:
docker build -t apollo-config-server:v1.7.1 .
docker tag apollo-config-server:v1.7.1 hub.thinkinpower.net/zizai/apollo-config-server:v1.7.1
docker push hub.thinkinpower.net/zizai/apollo-config-server:v1.7.1
docker build -t apollo-admin-server:v1.7.1 .
docker tag apollo-admin-server:v1.7.1 hub.thinkinpower.net/zizai/apollo-admin-server:v1.7.1
docker push hub.thinkinpower.net/zizai/apollo-admin-server:v1.7.1
docker build -t apollo-portal-server:v1.7.1 .
docker tag apollo-portal-server:v1.7.1 hub.thinkinpower.net/zizai/apollo-portal-server:v1.7.1
docker push hub.thinkinpower.net/zizai/apollo-portal-server:v1.7.1
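The build/tag/push cycle is identical for all four images, so it can be scripted. This is a sketch that prints the commands (registry host hub.thinkinpower.net and namespace zizai as used in this article); run each group from the directory containing that image's Dockerfile, or pipe the output to sh.

```shell
# Print the build/tag/push commands for one image; $1 = image name, $2 = tag.
REG=hub.thinkinpower.net/zizai
build_tag_push() {
  echo "docker build -t $1:$2 ."
  echo "docker tag $1:$2 ${REG}/$1:$2"
  echo "docker push ${REG}/$1:$2"
}
build_tag_push alpine-bash 3.8
for svc in apollo-config-server apollo-admin-server apollo-portal-server; do
  build_tag_push "$svc" v1.7.1
done
```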
Second, deploy Apollo to Kubernetes
1. Create the databases
A note first:
In real production, the performance penalty of disks backed by distributed storage is very noticeable for IO-intensive applications such as MySQL. Therefore, in practice, applications like MySQL are not managed inside Kubernetes but are deployed independently on dedicated servers, while stateless applications such as web services still run in Kubernetes. A web server then has two ways to reach a database outside of Kubernetes: connect directly to the IP of the physical server hosting the database, or map the external server to a service inside Kubernetes using a Kubernetes Endpoints object.
We use an external MySQL as the database and will not deploy MySQL into K8s.
Execute the scripts under the directory scripts/apollo-on-kubernetes/db. The Apollo server needs two kinds of databases: ApolloConfigDB (one per environment) and ApolloPortalDB (one for the portal). The database scripts ship with the repository: https://github.com/ctripcorp/apollo/tree/master/scripts/apollo-on-kubernetes/db. If Apollo is opened with four environments, namely dev, test-alpha, test-beta, and prod, import the corresponding files under scripts/apollo-on-kubernetes/db into MySQL.
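As a sketch, the imports might look like the following; the mysql host, the per-environment directory labels, and the .sql file names here are placeholders, so match them against the actual files under scripts/apollo-on-kubernetes/db. The commands are printed rather than executed.

```shell
# Print a mysql import command; $1 = db directory label, $2 = sql file (placeholders).
import_cmd() {
  echo "mysql -h <mysql-host> -u root -p < db/$1/$2"
}
import_cmd config-db-dev apolloconfigdb.sql
import_cmd config-db-fat apolloconfigdb.sql
import_cmd config-db-pro apolloconfigdb.sql
import_cmd portal-db     apolloportaldb.sql
```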
2. Deploy the yaml file of K8s
The yaml from the official repository can be downloaded and modified. Because I use images from my own registry and tested repeatedly, my main modifications are the following:
(1) delete the privileged security setting from the configuration files:
securityContext:
  privileged: true
(2) add the registry pull secret:
imagePullSecrets:
  - name: registry-harbor
(3) set the image to be pulled every time:
imagePullPolicy: Always
(4) add the MySQL connection configuration.
I use only 3 environments; the files that need to be modified are shown in the original article's figure.
Because there are many changes, I list each file below, taking only the development environment apollo-env-dev as an example; the other environments are modified analogously. When applying, it is recommended to execute the files in the order (3), (2), then (1) below.
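The recommended order, (3) the external MySQL service first, then (2) the config server, then (1) the admin server, can be written as a small loop. This sketch prints the kubectl commands (namespace zizai as used in this article); remove the echo indirection to apply for real.

```shell
# Print the apply commands in dependency order: mysql service -> config -> admin.
NS=zizai
apply_cmd() { echo "kubectl -n $NS apply -f $1"; }
for f in service-mysql-for-apollo-dev-env.yaml \
         service-apollo-config-server-dev.yaml \
         service-apollo-admin-server-dev.yaml; do
  apply_cmd "$f"
done
```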
(1), service-apollo-admin-server-dev.yaml
---
# configmap for apollo-admin-server-dev
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: zizai
  name: configmap-apollo-admin-server-dev
data:
  application-github.properties: |
    spring.datasource.url = jdbc:mysql://service-mysql-for-apollo-dev-env.zizai:3306/DevApolloConfigDB?characterEncoding=utf8
    spring.datasource.username = admin
    spring.datasource.password = mysql-admin
    eureka.service.url = http://statefulset-apollo-config-server-dev-0.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-1.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-2.service-apollo-meta-server-dev:8080/eureka/
---
kind: Service
apiVersion: v1
metadata:
  namespace: zizai
  name: service-apollo-admin-server-dev
  labels:
    app: service-apollo-admin-server-dev
spec:
  ports:
    - protocol: TCP
      port: 8090
      targetPort: 8090
  selector:
    app: pod-apollo-admin-server-dev
  type: ClusterIP
  sessionAffinity: ClientIP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: zizai
  name: deployment-apollo-admin-server-dev
  labels:
    app: deployment-apollo-admin-server-dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod-apollo-admin-server-dev
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: pod-apollo-admin-server-dev
    spec:
      imagePullSecrets:  # docker registry secret; remove if you don't need it
        - name: registry-harbor
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - pod-apollo-admin-server-dev
                topologyKey: kubernetes.io/hostname
      volumes:
        - name: volume-configmap-apollo-admin-server-dev
          configMap:
            name: configmap-apollo-admin-server-dev
            items:
              - key: application-github.properties
                path: application-github.properties
      initContainers:
        - image: hub.thinkinpower.net/zizai/alpine-bash:3.8
          imagePullPolicy: Always
          name: check-service-apollo-config-server-dev
          command: ['bash', '-c', "curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1 --retry-max-time 120 service-apollo-config-server-dev.zizai:8080"]
      containers:
        - image: hub.thinkinpower.net/zizai/apollo-admin-server:v1.7.1
          imagePullPolicy: Always
          name: container-apollo-admin-server-dev
          ports:
            - protocol: TCP
              containerPort: 8090
          volumeMounts:
            - name: volume-configmap-apollo-admin-server-dev
              mountPath: /apollo-admin-server/config/application-github.properties
              subPath: application-github.properties
          env:
            - name: APOLLO_ADMIN_SERVICE_NAME
              value: "service-apollo-admin-server-dev.zizai"
          readinessProbe:
            tcpSocket:
              port: 8090
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            tcpSocket:
              port: 8090
            initialDelaySeconds: 120
            periodSeconds: 10
      dnsPolicy: ClusterFirst
      restartPolicy: Always
(2), service-apollo-config-server-dev.yaml
---
# configmap for apollo-config-server-dev
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: zizai
  name: configmap-apollo-config-server-dev
data:
  application-github.properties: |
    spring.datasource.url = jdbc:mysql://service-mysql-for-apollo-dev-env.zizai:3306/DevApolloConfigDB?characterEncoding=utf8
    spring.datasource.username = admin
    spring.datasource.password = mysql-admin
    eureka.service.url = http://statefulset-apollo-config-server-dev-0.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-1.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-2.service-apollo-meta-server-dev:8080/eureka/
---
kind: Service
apiVersion: v1
metadata:
  namespace: zizai
  name: service-apollo-meta-server-dev
  labels:
    app: service-apollo-meta-server-dev
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    app: pod-apollo-config-server-dev
  type: ClusterIP
  clusterIP: None
  sessionAffinity: ClientIP
---
kind: Service
apiVersion: v1
metadata:
  namespace: zizai
  name: service-apollo-config-server-dev
  labels:
    app: service-apollo-config-server-dev
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30002
  selector:
    app: pod-apollo-config-server-dev
  type: NodePort
  sessionAffinity: ClientIP
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  namespace: zizai
  name: statefulset-apollo-config-server-dev
  labels:
    app: statefulset-apollo-config-server-dev
spec:
  serviceName: service-apollo-meta-server-dev
  replicas: 3
  selector:
    matchLabels:
      app: pod-apollo-config-server-dev
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: pod-apollo-config-server-dev
    spec:
      imagePullSecrets:  # docker registry secret; remove if you don't need it
        - name: registry-harbor
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - pod-apollo-config-server-dev
                topologyKey: kubernetes.io/hostname
      volumes:
        - name: volume-configmap-apollo-config-server-dev
          configMap:
            name: configmap-apollo-config-server-dev
            items:
              - key: application-github.properties
                path: application-github.properties
      containers:
        - image: hub.thinkinpower.net/zizai/apollo-config-server:v1.7.1
          imagePullPolicy: Always
          name: container-apollo-config-server-dev
          ports:
            - protocol: TCP
              containerPort: 8080
          volumeMounts:
            - name: volume-configmap-apollo-config-server-dev
              mountPath: /apollo-config-server/config/application-github.properties
              subPath: application-github.properties
          env:
            - name: APOLLO_CONFIG_SERVICE_NAME
              value: "service-apollo-config-server-dev.zizai"
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 10
      dnsPolicy: ClusterFirst
      restartPolicy: Always
(3), service-mysql-for-apollo-dev-env.yaml
---
# service for the external mysql
kind: Service
apiVersion: v1
metadata:
  namespace: zizai
  name: service-mysql-for-apollo-dev-env
  labels:
    app: service-mysql-for-apollo-dev-env
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: ClusterIP
  sessionAffinity: None
---
kind: Endpoints
apiVersion: v1
metadata:
  namespace: zizai
  name: service-mysql-for-apollo-dev-env
subsets:
  - addresses:
      - ip: 10.29.254.48
    ports:
      - protocol: TCP
        port: 3306
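A hedged way to verify the external-MySQL wiring after applying this file: ask Kubernetes for the service's endpoints, which should list the physical MySQL address from the Endpoints object above. The block below only builds and prints the command.

```shell
# Print the verification command for the selector-less mysql service.
check_cmd="kubectl -n zizai get endpoints service-mysql-for-apollo-dev-env"
echo "$check_cmd"
# The ENDPOINTS column is expected to show 10.29.254.48:3306 on a live cluster.
```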
3. Add an Ingress
The official example uses a K8s NodePort for access, but in practice we access the Portal through an Ingress.
Note: because we deploy the portal with multiple instances, the Ingress needs session persistence, otherwise login on the page will not get you into the portal. Specifically:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"  # session persistence
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
The full Ingress manifest is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zizai-apollo-portal
  namespace: zizai
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"  # session persistence
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - host: zizai-apollo-portal.test.thinkinpower.net
      http:
        paths:
          - path: /
            backend:
              serviceName: service-apollo-portal-server
              servicePort: 8070
4. Configure nginx
Point nginx at the Ingress.
nginx configuration file: zizai-apollo-portal.test.thinkinpower.net.conf
server {
    listen 80;
    server_name zizai-apollo-portal.test.thinkinpower.net;
    access_log /data/logs/nginx/zizai-apollo-portal.test.thinkinpower.net.access.log main;
    error_log /data/logs/nginx/zizai-apollo-portal.test.thinkinpower.net.error.log;
    root /data/webapps/zizai-apollo-portal.test.thinkinpower.net/test/static;
    index index.html index.htm;
    client_max_body_size 50m;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://kubernetes;  # upstream pointing at the cluster
    }
}
You can then access the portal by domain name: http://zizai-apollo-portal.test.thinkinpower.net.
After everything is applied, you can verify the results (screenshots in the original article): (1) the created deployments; (2) the deployment replicas; (3) the created services; (4) the created ingress; (5) the created config maps.
Third, basic usage
This article only demonstrates basic usage; a later article will cover usage in detail. Feel free to leave a comment on this article.
1. Create a project
2. Select an environment and add a variable, e.g. timeout
3. If an environment is added while you are working, the page will prompt you to refresh and supplement the missing environment.
Thank you for reading. This concludes "how to deploy the high-availability configuration center Apollo in K8s"; the specific usage still needs to be verified in practice.