How to Solve the Problem of DataFlow Deploying K8s Applications


This article explains how to solve the problem of DataFlow failing to deploy K8s applications. The content is quite detailed; interested readers can refer to it, and I hope it helps you.

1 Preface

For various reasons, the team's Kubernetes cluster is restricted: applications must be deployed on specific Nodes. Because no node restriction was specified, Spring Cloud Data Flow failed to run Tasks and could not create Pods. Configuring it according to the official Spring documentation did not help. Only after reading the source code, modifying it, and adding logs did we finally solve the problem.

2 The configuration does not take effect

When we define our own yaml file and deploy it with kubectl apply, the node restriction we add looks like this:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                  - linux
  containers:
    - name: php-apache
      image: 'pkslow/hpa-example:latest'
      ports:
        - containerPort: 80
          protocol: TCP
      resources:
        requests:
          cpu: 200m
      imagePullPolicy: IfNotPresent

This setting can be successfully deployed.

Modify the configuration of Data Flow as follows:

spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              default:
                limits:
                  memory: 1024Mi
                affinity:
                  nodeAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                      nodeSelectorTerms:
                        - matchExpressions:
                            - key: beta.kubernetes.io/os
                              operator: In
                              values:
                                - linux
  datasource:
    url: jdbc:mysql://${MYSQL_SERVICE_HOST}:${MYSQL_SERVICE_PORT}/mysql
    username: root
    password: ${mysql-root-password}
    driverClassName: org.mariadb.jdbc.Driver
    testOnBorrow: true
    validationQuery: "SELECT 1"

Publishing a Task via Spring Cloud Data Flow then fails with the following error:

Pods in namespace pkslow can only map to specific nodes, status=Failure

Check the official website and modify the configuration according to the official format. The modification is as follows:

spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              default:
                limits:
                  memory: 1024Mi
                affinity:
                  nodeAffinity: {requiredDuringSchedulingIgnoredDuringExecution: {nodeSelectorTerms: [{matchExpressions: [{key: 'beta.kubernetes.io/os', operator: 'In', values: ['linux']}]}]}}

It still fails. Changing it to the property form affinity.nodeAffinity=xxx also reports an error, and adding quotation marks does not help either.

Checking the logs does not give much information either. After fiddling for a long time without much progress, I turned to the source code.

3 View the source code

3.1 Download the source code

Download the source code of Spring Cloud Data Flow and take a look; it is not of much use by itself, because the actual release to Kubernetes goes through Spring Cloud Deployer Kubernetes, so download that source code instead. Be careful not to download the wrong version; we are using version 2.4.0. Or clone the whole repository and switch to the corresponding branch:

$ git clone https://github.com/spring-cloud/spring-cloud-deployer-kubernetes.git
Cloning into 'spring-cloud-deployer-kubernetes'...
remote: Enumerating objects: 65, done.
remote: Counting objects: 100% (65/65), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 4201 (delta 26), reused 42 (delta 8), pack-reused 4136
Receiving objects: 100% (4201/4201), 738.79 KiB | 936.00 KiB/s, done.
Resolving deltas: 100% (1478/1478), done.
$ cd spring-cloud-deployer-kubernetes/
$ git branch
* master
$ git checkout 2.4.0
Branch '2.4.0' set up to track remote branch '2.4.0' from 'origin'.
Switched to a new branch '2.4.0'
$ git branch
* 2.4.0
  master

Build first to ensure success:

$ mvn clean install -DskipTests

3.2 Add logs

Reading the source code alone does not show why the configuration fails to take effect, so add some logging at the key points. First find the entry point for publishing a Task:

KubernetesTaskLauncher#launch(AppDeploymentRequest)

That is, the launch method of KubernetesTaskLauncher. Start tracing the process of creating a Kubernetes Pod from there:

KubernetesTaskLauncher#launch(AppDeploymentRequest)
KubernetesTaskLauncher#launch(String, AppDeploymentRequest)
AbstractKubernetesDeployer#createPodSpec
DeploymentPropertiesResolver#getAffinityRules

Then add log printing along the whole call chain. Note that a distinctive marker string should be included in the log to make it easy to find, for example:

logger.info("***pkslow log***: " + affinity.toString());
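For context, here is a rough sketch of where such a marker log lands. The method shape is assumed from the 2.4.0 sources and simplified; the resolution logic itself is elided:

import io.fabric8.kubernetes.api.model.Affinity;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Map;

// Sketch only: marker logging inside DeploymentPropertiesResolver#getAffinityRules.
// The real method resolves node/pod/podAnti affinity from the deployment
// properties and the deployer defaults; details are omitted here.
private static final Logger logger = LoggerFactory.getLogger(DeploymentPropertiesResolver.class);

Affinity getAffinityRules(Map<String, String> deploymentProperties) {
    Affinity affinity = new Affinity();
    // ... resolve nodeAffinity / podAffinity / podAntiAffinity here ...
    logger.info("***pkslow log***: resolved affinity = " + affinity);
    return affinity;
}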

After adding the logs, re-build the package, replace the jar that Data Flow pulls in, and redeploy to test.

The new logs show that the configured Properties have not taken effect, though not yet why.

4 Modify the source code

After more fiddling it was still not solved, but the project urgently needed a fix, so the stopgap was to modify the source code to force the behavior the configuration was meant to produce:

If the Affinity is not read from the properties, generate one ourselves, as sketched below.
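As an illustration, a minimal sketch of such a fallback, using the fabric8 Kubernetes model classes the deployer already depends on. This is not the project's actual patch; the key and values are the ones from our yaml above:

import io.fabric8.kubernetes.api.model.Affinity;
import io.fabric8.kubernetes.api.model.AffinityBuilder;

// Fallback: if no Affinity was resolved from the properties, hard-code the
// node restriction the cluster requires.
static Affinity defaultAffinity() {
    return new AffinityBuilder()
            .withNewNodeAffinity()
                .withNewRequiredDuringSchedulingIgnoredDuringExecution()
                    .addNewNodeSelectorTerm()
                        .addNewMatchExpression()
                            .withKey("beta.kubernetes.io/os")
                            .withOperator("In")
                            .withValues("linux")
                        .endMatchExpression()
                    .endNodeSelectorTerm()
                .endRequiredDuringSchedulingIgnoredDuringExecution()
            .endNodeAffinity()
            .build();
}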

After repackaging, replacing the jar, and redeploying, no errors are reported and the Task executes normally.

5 Final solution

The previous fix is only a stopgap, not a proper solution; we still need to figure out why the configuration did not take effect. So look at the source code again. Reading the class KubernetesDeployerProperties reveals the clue:

There is no field named affinity here; the affinity settings sit directly on the account properties.
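Abridged and reconstructed for illustration, the relevant part of the class looks roughly like this:

// KubernetesDeployerProperties (2.4.0), abridged: there is no wrapping
// 'affinity' field, so an 'affinity:' prefix in the yaml has nothing to
// bind to.
public class KubernetesDeployerProperties {
    // ...
    private NodeAffinity nodeAffinity;
    private PodAffinity podAffinity;
    private PodAntiAffinity podAntiAffinity;
    // ... getters and setters ...
}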

In addition, starting from the test cases (a good idea, since test cases can tell you a lot) shows the DataFlow configuration they expect.

Therefore, the affinity prefix must not be configured at all. After this correction, the configuration is as follows:

spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              default:
                limits:
                  memory: 1024Mi
                nodeAffinity: {requiredDuringSchedulingIgnoredDuringExecution: {nodeSelectorTerms: [{matchExpressions: [{key: 'beta.kubernetes.io/os', operator: 'In', values: ['linux']}]}]}}

After redeploying, everything works!
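Why the corrected form binds, in a hedged sketch: Spring Boot's relaxed binding maps the account subtree onto KubernetesDeployerProperties by field name. The binding prefix below is an assumption based on how Data Flow wires task platforms, and environment stands for an injected Spring Environment:

import org.springframework.boot.context.properties.bind.Binder;

// Sketch: 'nodeAffinity' matches the field on KubernetesDeployerProperties
// directly, while 'affinity.nodeAffinity' has no matching field and is
// silently dropped by the binder.
KubernetesDeployerProperties props = Binder.get(environment)
        .bind("spring.cloud.dataflow.task.platform.kubernetes.accounts.default",
              KubernetesDeployerProperties.class)
        .orElseGet(KubernetesDeployerProperties::new);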

6 Summary

This time we were really let down by Spring: no clear example of the configuration is given, and the hints in the official documentation are downright misleading. It is hard to guess at first that the affinity prefix must be dropped, because that is the standard Kubernetes structure and the official Spring documentation hints at it. What a mess!

Fortunately, this problem was finally solved by looking at the source code and debugging.

That is all on how to solve the problem of DataFlow deploying K8s applications. I hope the above content is of some help to you. If you think the article is good, share it for more people to see.
