

A practical course in K8s project deployment


Taking a small project deployed on K8s as an example, this article walks through the deployment process of a K8s project.

A K8s cluster is required. It can be built from binary files, but containerized deployment is the officially recommended approach; here we use kubeadm for a quick deployment.

This guide uses kubeadm 1.13 to build the cluster (there may be small differences between versions).

Five CentOS 7 machines, minimal install.

The hostnames are server1, server2, server3, server4, server5.

1: system update: yum update -y

2: modify the system hostname: vim /etc/hostname

3: disable SELinux: vim /etc/selinux/config and set SELINUX=disabled

4: turn off the firewall (optional; you can instead open the required ports yourself): systemctl disable firewalld

5: edit the hosts file (optional; you can set up internal DNS instead): vim /etc/hosts and add entries for server1 through server5
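A minimal sketch of the entries to add; the IP-to-hostname mapping below is an assumption based on the addresses that appear in the NFS exports later in this article, so substitute your servers' real addresses:

10.99.32.3 server1     # assumed mapping, replace with your real addresses
10.99.32.10 server2
10.99.32.12 server3
10.99.32.31 server4
10.99.32.32 server5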

6: restart the server: shutdown -r now

7: optionally, set up passwordless (key-based) SSH login between the servers
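A minimal sketch of key-based SSH setup using the standard OpenSSH tools, run on the master and repeated for each of the other hostnames:

ssh-keygen -t rsa          # generate a key pair, accepting the defaults
ssh-copy-id root@server2   # copy the public key to each of the other servers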

8: install docker: yum install docker -y

8.1: enable start on boot: systemctl enable docker

8.2: configure a proxy for docker, otherwise the K8s images cannot be pulled

vim /etc/systemd/system/multi-user.target.wants/docker.service

Add a line under the [Service] section:

Environment=HTTP_PROXY=http://10.99.32.2:1080

Refresh the systemd configuration and restart docker: systemctl daemon-reload && systemctl restart docker
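An alternative that survives docker package upgrades is a systemd drop-in file rather than editing the unit in place; a sketch, assuming the same proxy address as above:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment=HTTP_PROXY=http://10.99.32.2:1080
EOF
systemctl daemon-reload && systemctl restart docker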

9: install kubeadm:

9.1: add the kubernetes repo: vim /etc/yum.repos.d/kubernetes.repo and add the following:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*

9.2: configure a public network proxy for yum: a proxy is needed here, otherwise the packages cannot be downloaded.

vim /etc/yum.conf and add or modify one line:

proxy=http://yourhost:yourport

9.3: install the packages: yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

9.4: enable start on boot: systemctl enable kubelet && systemctl start kubelet

10: kubeadm-related configuration:

10.1: configure iptables bridging on the master node (optional; it may already be set)

vim /etc/sysctl.d/k8s.conf, add two lines:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

Then apply the settings: sysctl --system

10.2: turn off all swap: swapoff -a
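Note that swapoff -a does not survive a reboot; to keep swap off permanently you can also comment out the swap entry in /etc/fstab, for example:

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap line so it stays disabled after reboot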

11: start docker and initialize kubeadm (sometimes the cgroup driver must also be adjusted; that is not needed here)

systemctl start docker

kubeadm init. Initialization has succeeded when a kubeadm join command appears in the output (save it; it is used to join the slave nodes in step 13).

If you use the calico network plug-in, you need to specify the pod IP range to allocate from:

kubeadm init --pod-network-cidr=192.168.0.0/16

12: on the management node, copy the config file as prompted so that the kubectl commands work.
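For reference, the commands printed by kubeadm init for this step are:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config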

Then run kubectl get nodes; if you can see node information, everything is normal. If instead you see The connection to the server localhost:8080 was refused - did you specify the right host or port?, the config file has not been configured.

13: install the slave nodes: follow the same steps as for the master node (with version 1.13, the slaves do not need to pre-download any images). After installing, run swapoff -a, and then join the cluster: kubeadm join 10.99.32.3:6443 --token euoczm.lhfb8w6ngx98aj3z --discovery-token-ca-cert-hash sha256:d094ed1b6769f25247e6b1586541f7dbee59272cddb93bb35e054472e40984e4
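If the token has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command   # prints a new kubeadm join line with a valid token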

14: after all nodes have joined, kubectl get nodes on the master machine shows all the machines, but in NotReady status.

15: install the network: use the calico network plug-in:

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

kubectl apply -f rbac-kdd.yaml

kubectl apply -f calico.yaml
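You can watch the network pods come up before checking node status:

kubectl get pods -n kube-system -w   # wait until the calico pods reach Running status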

16: check kubectl get nodes again; all nodes should now be in Ready status.

The second part: build the PV

1: use NFS to provide the PV/PVC storage volumes (dynamic PVC provisioning is also possible)

1.1: build NFS: install the required packages on each node: yum -y install nfs-utils rpcbind

1.2: create a shared directory on the master: mkdir /nfsdisk

1.3: configure the NFS server: vim /etc/exports and add the following:

/nfsdisk 10.99.32.3(rw,sync,fsid=0,no_root_squash) 10.99.32.10(rw,sync,fsid=0,no_root_squash) 10.99.32.12(rw,sync,fsid=0,no_root_squash) 10.99.32.31(rw,sync,fsid=0,no_root_squash) 10.99.32.32(rw,sync,fsid=0,no_root_squash)

The IP addresses are those of the clients that need read and write access to this directory.

1.4: enable the nfs service to start on boot and run it: systemctl enable nfs && systemctl start nfs

1.5: refresh the shares: exportfs -rv. If you see exporting 10.99.32.3:/nfsdisk, the configuration is correct.

1.6: nfs must also be started on each client, otherwise configuring the PV will produce errors

systemctl enable nfs && systemctl start nfs
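Before wiring the share into a PV, it is worth confirming each client can actually reach it; a quick sketch (the mount point /mnt is arbitrary):

showmount -e 10.99.32.3   # list the exports visible from this client
mount -t nfs 10.99.32.3:/nfsdisk /mnt && touch /mnt/test && umount /mnt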

1.7: configure the PV and PVC so that a single PVC can be shared by multiple deployments

Create a pv.yaml file and add the following:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 150Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.99.32.3
    path: /nfsdisk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 150Gi
  volumeName: nfs-pv

1.8: create the PV and PVC: kubectl create -f pv.yaml

1.9: check whether they were created successfully: kubectl get pv and kubectl get pvc (the STATUS column should show Bound)

The third part: build the individual application servers. Each one mainly consists of two parts:

1: volume mounts (for application servers that need to store data separately, such as databases or redis)

2: service configuration: port mappings must be provided for external and internal access. Simple NodePort mappings are configured here; for more complex, advanced setups a service mesh can be used.

3: once the yaml file for a deployment is written, apply it directly with kubectl create -f xxx.yaml.

MySQL server:

apiVersion: v1
kind: Service
metadata:
  name: mysql-cs
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - name: mysql
      port: 3306
      nodePort: 31718
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - name: mysql
      port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7.20
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "0"
            - name: MYSQL_ROOT_PASSWORD
              value: "Awd123456789"   # must match the password used by the probes below
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: config
              mountPath: /etc/mysql/conf.d/
          resources:
            requests:
              cpu: 800m
              memory: 1Gi
            limits:
              cpu: 1000m
              memory: 2Gi
          livenessProbe:
            exec:
              command: ["mysqladmin", "-uroot", "-pAwd123456789", "ping"]
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command: ["mysql", "-h", "127.0.0.1", "-uroot", "-pAwd123456789", "-e", "SELECT 1"]
            initialDelaySeconds: 5
            periodSeconds: 3
            timeoutSeconds: 2
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc
        - name: config
          configMap:
            name: mysql
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
data:
  my.cnf: |
    [mysqld]
    sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
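Once the MySQL pod is running, the mysql-cs NodePort service exposes it on every node. A quick connectivity check from outside the cluster; the node IP below is an example, substitute one of your own nodes:

mysql -h 10.99.32.3 -P 31718 -uroot -p   # connect via the NodePort; enter the root password when prompted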

RabbitMQ server:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3.7.2-management-alpine
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: root
            - name: RABBITMQ_DEFAULT_PASS
              value: awd123456789
            - name: RABBITMQ_DEFAULT_VHOST
              value: "/"
          ports:
            - name: rabbitmq
              containerPort: 5672
            - name: management
              containerPort: 15672
          volumeMounts:
            - name: data
              mountPath: /var/lib/rabbitmq
              subPath: rabbitmq
          resources:
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 800m
              memory: 1024Mi
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-manager
spec:
  type: NodePort
  ports:
    - name: management
      port: 15672
      nodePort: 31717
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
    - port: 5672
      name: rabbitmq
  selector:
    app: rabbitmq

Redis server:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:4.0.6-alpine
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
              subPath: redis
          resources:
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 800m
              memory: 1024Mi
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
    - name: redis
      port: 6379
      nodePort: 31715
  selector:
    app: redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cs
  labels:
    app: redis
spec:
  ports:
    - port: 6379   # redis default port
      name: redis
  selector:
    app: redis
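After applying these manifests, a quick sanity check on the master:

kubectl get pods -o wide   # all pods should reach Running status
kubectl get svc            # shows the ClusterIP services and NodePort mappings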

The fourth part: build your own application.

1: first package your application into an image, then push it to a public or private registry

2: write the yaml file, using the image you packaged yourself.
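A minimal sketch of such a yaml file, following the same pattern as the servers above; the name myapp, the image, the port, and the nodePort are all placeholders for your own application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0   # the image you pushed to your registry
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31719         # any free NodePort
  selector:
    app: myapp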
