How to use stateless deployment Deployment

2025-04-04 Update From: SLTechnology News&Howtos

This article introduces how to use the stateless workload type, Deployment. It covers situations that come up in real cases and how to handle them; read it carefully and you should come away with something useful.

Create a cluster

When creating a cluster, planning ahead and choosing an optimized configuration greatly reduces later operations and maintenance work; some cluster settings cannot be modified after creation, or are troublesome to change.

Cluster planning

Network Planning:

Terway is a network plugin developed by Alibaba Cloud Container Service. It is functionally compatible with Flannel; if you prefer a conservative choice, use Flannel.

Pod Network CIDR

Defaults to a large /16 segment. Valid segments (or their subnets): 10.0.0.0/8, 172.16-31.0.0/12-16, 192.168.0.0/16.

Service CIDR

Defaults to a /20 address range. Optional: 10.0.0.0/16-24, 172.16-31.0.0/16-24, 192.168.0.0/16-24.

The two segments must not overlap or conflict, and they cannot be modified after the cluster is created.
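Since the Pod and Service CIDRs cannot be changed after creation, it is worth verifying the plan up front. A minimal sketch of such a check, using Python's standard `ipaddress` module (the example CIDR values are illustrative, not from the original article):

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Example planning check: a /16 Pod CIDR against a /20 Service CIDR
assert not cidrs_overlap("172.16.0.0/16", "192.168.0.0/20")   # safe to use together
# A Service CIDR carved out of the Pod CIDR would conflict:
assert cidrs_overlap("172.16.0.0/16", "172.16.32.0/20")
```

Running this against every pair (Pod CIDR, Service CIDR, and the VPC/vSwitch segments) before clicking "create" avoids an unfixable mistake.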

Use multiple vSwitches spread across multiple availability zones.

Public network access to ApiServer

For online clusters and others with high security requirements, you can choose not to expose the apiserver at all and use only a private SLB, but then you cannot publish through Cloud Effect (Alibaba's Yunxiao DevOps service).

Daily and pre-release clusters can expose the apiserver through a public SLB. Immediately after the cluster is created, configure access control on the SLB and restrict access to Cloud Effect only.

Note: almost every security vulnerability in K8s is related to the apiserver. For online K8s clusters, apply patches promptly, or keep the apiserver off the public network and use strict security groups and access control.

Security group

Set the security group to limit the access scope for master and worker machines.

Master machine planning

For high availability, 3 nodes are generally used. The Master selection rules are as follows:

Number of worker nodes → master specification:

1-5 nodes: 4C8G
6-20 nodes: 4C16G
21-100 nodes: 8C32G
100-200 nodes: 16C64G

For master storage, a high-performance 50-100 GB SSD is recommended, because the master runs etcd; the operating system itself takes up no more than 8 GB.

Worker machine planning

32C64G ECS

Storage: system disk 100 GB SSD; data disk 400 GB efficient cloud disk.

Operating system: CentOS 7.4 64-bit.

Alibaba Cloud recommends ECS Bare Metal (X-Dragon) instances first; if none are available, choose high-spec ECS. Size the node as a multiple of the pod specification you plan to deploy. For example, Java application pods are generally 4C8G, so purchase 32C64G or 64C128G nodes, and set a fixed request/limit for each pod at deployment time.
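The sizing rule above (node spec as a multiple of a fixed pod request/limit) can be sketched as simple arithmetic. This is an illustrative helper, not from the original article; the system/kubelet reservation values are assumptions:

```python
def pods_per_node(node_cpu: int, node_mem_gib: int,
                  pod_cpu: int, pod_mem_gib: int,
                  reserved_cpu: float = 1.0, reserved_mem_gib: float = 4.0) -> int:
    """How many pods with a fixed request=limit fit on one worker,
    after subtracting an assumed system/kubelet reservation."""
    by_cpu = int((node_cpu - reserved_cpu) // pod_cpu)
    by_mem = int((node_mem_gib - reserved_mem_gib) // pod_mem_gib)
    return max(0, min(by_cpu, by_mem))

# A 32C64G worker running 4C8G Java pods:
print(pods_per_node(32, 64, 4, 8))  # -> 7 with the assumed reservations
```

This kind of back-of-the-envelope check helps confirm that the purchased node size divides cleanly by the pod size, so little capacity is stranded.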

The machine configuration we chose:

Cluster establishment and configuration

Set up when establishing a cluster:

The cluster is created through the console. Alibaba Cloud Container Service provides a simple one-click cluster deployment function; a wizard walks you through completing the K8s cluster setup.

Set up the master and worker nodes according to the plan above, and mount /var/lib/docker on the data disk.

Set reasonable Pod network CIDR and Service CIDR IP segments.

Set a reasonable security policy and decide whether to expose the apiserver (clusters that need direct publishing from Cloud Effect require public exposure plus strict access control).

For Ingress, choose the secure option and use the private network where possible. If a public network Ingress is needed, it can easily be created in the console, with access control applied at the same time.

Kube-proxy mode: because iptables mode locks and rewrites the iptables rule set when updating a single rule, which causes performance problems at scale, IPVS mode is recommended.

Number of pods per node: the default of 128 is too large, since a node cannot realistically run that many; changing it to 64 is recommended.

The node service port range (NodePort, SLB) can be expanded if needed; the default is generally sufficient.
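In the Container Service console the proxy mode is chosen in the creation wizard; on a self-managed cluster the same IPVS choice recommended above appears in the kube-proxy configuration. An illustrative sketch of the relevant fragment (the scheduler choice is an assumption, not from the article):

```yaml
# kube-proxy configuration fragment (illustrative)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs        # hash-based service lookup instead of linear iptables traversal
ipvs:
  scheduler: rr   # round-robin; other IPVS schedulers are also available
```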

Cluster configuration modification:

Scale out the cluster by adding existing nodes (for node configuration refer to the above; mount /var/lib/docker on the data disk).

Worker nodes are modified or removed:

kubectl drain --ignore-daemonsets {node.name}

kubectl delete node {node.name}

Add existing nodes to the cluster

Namespace:

Create namespaces according to application groups, and set namespace resource quotas and limits for the application groups that need to be restricted.

Authorization:

Grant RBAC authorization between sub-accounts (one sub-account authorizing another).

Application owners set permissions through the bastion host.
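The sub-account RBAC grant mentioned above typically boils down to a RoleBinding. A minimal sketch, assuming a RAM sub-account UID as the subject and the built-in `edit` ClusterRole (binding name and UID are placeholders):

```yaml
# Illustrative RBAC grant: give one sub-account edit rights in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit-binding       # hypothetical name
  namespace: {app_name}
subjects:
  - kind: User
    name: "27xxxxxxxxxxxx"     # RAM sub-account UID (placeholder)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in role; use a custom Role for tighter scope
  apiGroup: rbac.authorization.k8s.io
```

Binding the built-in `edit` ClusterRole in a single namespace keeps the grant scoped to that application group rather than the whole cluster.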

Stateless deployment

Use stateless deployment Deployment to release in batches.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '34'
  labels:
    app: {app_name}-aone
  name: {app_name}-aone-1
  namespace: {app_name}
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: {app_name}-aone
  # batch restart/update strategy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: {app_name}-aone
    spec:
      containers:
        # environment variable setting the time zone
        - env:
            - name: TZ
              value: Asia/Shanghai
          image: >-
            registry-vpc.cn-north-2-gov-1.aliyuncs.com/{namespace}/{app_name}:20190820190005
          imagePullPolicy: Always
          # graceful offline: deregister from the service before stopping
          lifecycle:
            preStop:
              exec:
                command:
                  - sudo
                  - '-u'
                  - admin
                  - /home/{user_name}/{app_name}/bin/appctl.sh
                  - {app_name}
                  - stop
          # liveness check; strongly recommended
          livenessProbe:
            failureThreshold: 10
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 5900
            timeoutSeconds: 1
          name: {app_name}-aone
          # readiness check; strongly recommended
          readinessProbe:
            failureThreshold: 10
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 5900
            timeoutSeconds: 1
          # resource limits; these must be set reasonably
          resources:
            limits:
              cpu: '4'
              memory: 8Gi
            requests:
              cpu: '4'
              memory: 8Gi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          # application log directory under /home/{user_name}/logs,
          # mapped to the node's /var/lib/docker/logs data disk
          volumeMounts:
            - mountPath: /home/{user_name}/logs
              name: volume-1553755418538
      dnsPolicy: ClusterFirst
      # pull secret of the private image repository
      imagePullSecrets:
        - name: {app_name}-987
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      # log storage directory on the node's /var/lib/docker/logs data disk
      volumes:
        - hostPath:
            path: /var/lib/docker/logs/{app_name}
            type: ''
          name: volume-1553755418538

Service settings

Because the Cloud Controller Manager of Container Service deletes the SLB associated with a Service when the Service is deleted, a mistaken Service change can take the SLB down with it, forcing painful reconfiguration of domain names and security settings. It is therefore strongly recommended to decouple the Service from the SLB: use a NodePort Service, and configure the SLB's backend servers to point at the cluster nodes. If you need to pass through the real client IP while keeping load balancing in mind, certain configuration rules and methods apply; refer to this article.

NodePort:

apiVersion: v1
kind: Service
metadata:
  name: {app_name}
  namespace: {namespaces}
spec:
  clusterIP: 10.1.50.65
  # this policy affects whether the real source IP is preserved
  externalTrafficPolicy: Cluster
  ports:
    - name: {app_name}-80-7001
      nodePort: 32653
      port: 80
      protocol: TCP
      targetPort: 7001
    - name: {app_name}-5908-5908
      nodePort: 30835
      port: 5108
      protocol: TCP
      targetPort: 5108
  selector:
    app: {app_name}
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Then, on the load balancer management page, point the backend servers at the cluster's worker machines and set the port to the Service's nodePort above (32653). With this configuration, when the cluster Service is modified, deleted, or rebuilt, the SLB is not deleted by the cluster CCM, and the domain name and security configuration do not need to change. You can also apply policies and shift traffic in batches when upgrading or changing the Service configuration.

That's all for "how to use stateless deployment Deployment". Thank you for reading. If you want to learn more about the industry, follow the site, where the editor will publish more high-quality practical articles.
