2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/03 Report--
Related content:
Kubernetes deployment (1): architecture and function description
Kubernetes deployment (2): initialization of system environment
Kubernetes deployment (3): CA certificate making
Kubernetes deployment (4): ETCD cluster deployment
Kubernetes deployment (5): Haproxy, Keppalived deployment
Kubernetes deployment (6): Master node deployment
Kubernetes deployment (7): Node node deployment
Kubernetes deployment (8): Flannel network deployment
Kubernetes deployment (9): CoreDNS, Dashboard, Ingress deployment
Kubernetes deployment (10): storage glusterfs and heketi deployment
Kubernetes deployment (11): managed Helm and Rancher deployment
Kubernetes deployment (12): helm deployment harbor enterprise image repository
CoreDNS deployment
Service discovery within Kubernetes and domain-name resolution between Pods are implemented through DNS, so DNS is very important to a Kubernetes cluster. There are currently two options: kube-dns and CoreDNS. This time we install CoreDNS.
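As a quick illustration of what CoreDNS serves: every Service gets a DNS name of the form <service>.<namespace>.svc.<cluster-domain>. A minimal sketch, assuming the conventional default cluster domain cluster.local and a hypothetical tomcat Service in the default namespace:

```shell
# Build the FQDN that CoreDNS would answer for a Service
# (names here are illustrative; cluster.local is the usual default domain).
svc=tomcat
ns=default
cluster_domain=cluster.local
fqdn="${svc}.${ns}.svc.${cluster_domain}"
echo "$fqdn"   # tomcat.default.svc.cluster.local
```

Inside any Pod, the full name resolves through CoreDNS; from a Pod in the same namespace, the short name `tomcat` also works thanks to the Pod's DNS search list.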
All the software and configuration files are saved in the Baidu network disk link shared in the previous article.
[root@node-01 k8s]# kubectl create -f coredns/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/coredns created
[root@node-01 yaml]# kubectl get pod -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5f94b495b5-58t47   1/1     Running   0          6m
coredns-5f94b495b5-wvcsg   1/1     Running   0          6m
Then we can exec into any Pod and ping a domain name to see whether DNS resolves properly.
[root@node-01 yaml]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
tomcat-7666b9764-mfgpb   1/1     Running   0          11h
[root@node-01 yaml]# kubectl exec -it tomcat-7666b9764-mfgpb -- /bin/sh
# ping baidu.com
PING baidu.com (220.181.57.216) 56(84) bytes of data.
64 bytes from 220.181.57.216 (220.181.57.216): icmp_seq=1 ttl=54 time=37.2 ms
64 bytes from 220.181.57.216 (220.181.57.216): icmp_seq=2 ttl=54 time=37.0 ms
64 bytes from 220.181.57.216 (220.181.57.216): icmp_seq=3 ttl=54 time=36.6 ms
64 bytes from 220.181.57.216 (220.181.57.216): icmp_seq=4 ttl=54 time=37.9 ms
^C
--- baidu.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 36.629/37.230/37.958/... ms
Create Dashboard
[root@node-01 yaml]# kubectl create -f dashboard/
[root@node-01 yaml]# kubectl cluster-info
Kubernetes master is running at https://10.31.90.200:6443
CoreDNS is running at https://10.31.90.200:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://10.31.90.200:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Visit Dashboard
https://10.31.90.200:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Username: admin, password: admin. Select the token login mode.
Get Token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
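The command above pulls the admin-user secret name out of the `kubectl get secret` listing with grep and awk before describing it. An offline sketch of just that extraction step, using made-up secret names in place of real kubectl output:

```shell
# Simulated `kubectl -n kube-system get secret` output (names are hypothetical)
kubectl_output='NAME                     TYPE                                  DATA   AGE
admin-user-token-abc12   kubernetes.io/service-account-token   3      11h
coredns-token-def34      kubernetes.io/service-account-token   3      11h'

# Same pipeline as in the article: keep the admin-user line, print column 1
secret_name=$(printf '%s\n' "$kubectl_output" | grep admin-user | awk '{print $1}')
echo "$secret_name"   # admin-user-token-abc12
```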
After performing the above steps, you can see the dashboard.
Ingress deployment
In Kubernetes, the IP addresses of Service and Pod resources can only be used for communication inside the cluster network; that traffic cannot cross the edge router to reach clients outside the cluster. Although external traffic can be brought in through the nodes by using the NodePort or LoadBalancer Service types, this is still layer-4 traffic forwarding, and the available load balancing is likewise only a transport-layer mechanism.
Ingress is one of the standard resource types of the Kubernetes API. It is essentially a set of rules that forward requests to specified Service resources based on DNS host name or URL path, and it is used to route request traffic from outside the cluster to services inside it. However, an Ingress resource by itself cannot move any traffic: it is only a collection of routing rules, and it needs another component to actually listen on a socket and route request traffic according to those rules. The component that listens on sockets and forwards traffic on behalf of Ingress resources is called an Ingress controller.
An Ingress controller can be implemented by any service program with HTTP/HTTPS reverse-proxy capability, such as Nginx, Envoy, HAProxy, Vulcand or Traefik. The controller itself is also a Pod resource running in the cluster, on the same Pod network as the applications it proxies.

When distributing traffic by Ingress rules, the controller forwards the client's request directly to the backend Pods behind the matching Service. The Service object is only used to identify the related Pod objects; the actual forwarding bypasses the Service, saving the port-proxy overhead incurred by kube-proxy. For example, with a rule defined for a host such as api.ilinux.io, the ingress-nginx controller can schedule request traffic straight to the matching backend Pods without another hop through the Service object, and path-based rules work the same way.

First of all, it is important to note that this time we are deploying ingress-nginx v0.21.0, and v0.21.0 no longer requires a default backend.
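Conceptually, the controller's routing decision is just "match the Host header (and path) against the Ingress rules, then pick a backend Service". A toy, illustrative-only sketch of that decision, using the host and backend from the example later in this article, and nginx's behaviour of answering 404 when no rule matches and no default backend exists:

```shell
# Illustrative-only routing decision: Host header -> backend service:port.
route() {
  case "$1" in
    www.cnlinux.club) echo "tomcat:8080" ;;
    *)                echo "404 (no rule matched)" ;;
  esac
}
route www.cnlinux.club     # tomcat:8080
route other.example.com    # 404 (no rule matched)
```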
Create Ingress Controller
You can download the official mandatory.yaml and install it from the local copy:
[root@node-01 ingress]# kubectl create -f mandatory.yaml
or apply it directly from the upstream repository:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Since the official manifest only creates the ingress controller Pod and does not expose an IP or ports, we need to create a Service for ingress that exposes nodePorts 20080 and 20443. If you want to deploy in a production environment, you can dedicate two node servers to the ingress controller and expose ports 80 and 443 directly.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    nodePort: 20080
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    nodePort: 20443
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
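One caveat about the nodePort values above: kube-apiserver's default --service-node-port-range is 30000-32767, so 20080 and 20443 are only accepted if that range was widened when the apiserver was configured (presumably done earlier in this series). A quick shell sketch of the range check:

```shell
# Prints yes/no depending on whether a nodePort falls inside a range.
in_range() {
  [ "$1" -ge "$2" ] && [ "$1" -le "$3" ] && echo yes || echo no
}
in_range 20080 30000 32767   # no  -> rejected by a default apiserver
in_range 20080 20000 40000   # yes -> accepted with a widened range
```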
Then add ports 20080 and 20443 of the three nodes to the haproxy backends. If you don't understand my network architecture, please see here. Then point an A record for www.cnlinux.club at 10.31.90.200.
listen ingress-80
    bind 10.31.90.200:80
    mode tcp
    balance roundrobin
    timeout server 15s
    timeout connect 15s
    server apiserver01 10.31.90.204:20080 check port 20080 inter 5000 fall 5
    server apiserver02 10.31.90.205:20080 check port 20080 inter 5000 fall 5
    server apiserver03 10.31.90.206:20080 check port 20080 inter 5000 fall 5

listen ingress-443
    bind 10.31.90.200:443
    mode tcp
    balance roundrobin
    timeout server 15s
    timeout connect 15s
    server apiserver01 10.31.90.204:20443 check port 20443 inter 5000 fall 5
    server apiserver02 10.31.90.205:20443 check port 20443 inter 5000 fall 5
    server apiserver03 10.31.90.206:20443 check port 20443 inter 5000 fall 5

Create a test tomcat demo
[root@node-01 yaml]# kubectl create -f tomcat-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  selector:
    app: tomcat
  ports:
  - name: tomcat
    protocol: TCP
    port: 8080
    targetPort: 8080
  type: ClusterIP

Create ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.cnlinux.club
    http:
      paths:
      - path:
        backend:
          serviceName: tomcat
          servicePort: 8080
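The `balance roundrobin` setting in the haproxy config above simply hands successive connections to the three node backends in turn. A tiny illustration of that rotation (pure shell, no haproxy involved):

```shell
# The three ingress nodePort backends from the haproxy config above.
backends="10.31.90.204:20080
10.31.90.205:20080
10.31.90.206:20080"

# pick N: which backend the Nth connection lands on under round-robin.
pick() {
  idx=$(( ($1 - 1) % 3 + 1 ))
  printf '%s\n' "$backends" | sed -n "${idx}p"
}
for request in 1 2 3 4; do
  echo "request $request -> $(pick "$request")"
done
# request 4 wraps back around to 10.31.90.204:20080
```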
At this point the Ingress is created. Open www.cnlinux.club in a browser and you can see the Tomcat page.
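For reference, the nginx.ingress.kubernetes.io/rewrite-target: / annotation used in the Ingress above replaces the matched path prefix with the target before the request is proxied to Tomcat. A rough string-level sketch of the idea (illustrative only, not the controller's actual implementation):

```shell
# rewrite <request-path> <matched-prefix> <target>: strip the matched
# prefix and graft the remainder onto the rewrite target.
rewrite() {
  rest="${1#"$2"}"
  rest="${rest#/}"
  printf '%s\n' "${3%/}/${rest}"
}
rewrite /index.html / /          # /index.html (target "/" leaves it unchanged)
rewrite /app/page.html /app /    # /page.html
```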
All the K8s-related documents will be updated one after another. If you think I have written well, I hope you will follow me. Thank you very much!