2025-01-16 Update From: SLTechnology News&Howtos > Servers
Advanced Kubernetes ingress-nginx
Table of contents:
1. The best way to access applications from the outside
2. Configuration management
3. Data volumes and persistent volumes
4. Stateful application deployment, revisited
5. The K8s security mechanism
As mentioned previously, if you choose NodePort to expose a service, you must first check whether the chosen port is already occupied, and every time you create a new application you must check again which ports have been assigned. NodePort itself relies on the default iptables proxy mode for network forwarding, i.e. SNAT/DNAT. It works only at layer 4; layer 7 features are not possible, and performance is poor because every packet must pass through iptables forwarding and filtering.
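To make the NodePort bookkeeping concrete, here is a minimal sketch of such a Service (the service name, labels and port number are hypothetical, not from the article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    app: web                  # hypothetical pod label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080           # must be checked manually for conflicts cluster-wide
```

Every new application needs its own unused nodePort in the 30000-32767 range, which is exactly the manual management burden the text describes.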
1. The best way to access applications from the outside
The relationship between Pod and Ingress:
- They are associated through a Service
- The Ingress Controller load-balances Pods, supporting TCP/UDP at layer 4 and HTTP at layer 7
Ingress Controller
Like other installed K8s components, this controller constantly interacts with the apiserver: it watches the relevant API objects and refreshes its own rules accordingly, just as other controllers do.
With Ingress, K8s provides a more global load balancer. Strictly speaking, an Ingress is a rule object in K8s; the component that implements those rules is what we call the ingress controller.
The main job of the ingress controller is to receive traffic and forward it to the specific Pods behind each rule: it tracks which applications are associated with which Pod IPs (the cluster pool) and exposes ports 80 and 443 for you.
1. Deploy the Ingress Controller
Deployment documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
2. Create Ingress rules, which expose a port and domain name for your application so that users reach it through the ingress controller
3. Choosing a controller type
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Note:
- Modify the image address to a domestic mirror: zhaocheng172/nginx-ingress-controller:0.20.0
- Use the host network: hostNetwork: true
[root@k8s-master demo]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
[root@k8s-master demo]# kubectl apply -f mandatory.yaml
[root@k8s-master demo]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-5654f58c87-r5vcq   1/1     Running   0          46s
The pod was scheduled to node2; there we can use netstat to check the ports it is listening on.
[root@k8s-master demo]# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5654f58c87-r5vcq   1/1     Running   0          3m51s   192.168.30.23   k8s-node2   <none>           <none>
[root@k8s-master demo]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.dagouzi.com
    http:
      paths:
      - backend:
          serviceName: deployment-service
          servicePort: 80
[root@k8s-master demo]# kubectl create -f ingress.yaml
[root@k8s-master demo]# kubectl get ingress -o wide
NAME              HOSTS             ADDRESS   PORTS   AGE
example-ingress   www.dagouzi.com             80      49m
To test access, I added the domain to my hosts file; if you use real DNS instead, it should likewise resolve to the node where ingress-nginx runs.
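A minimal sketch of the hosts entry assumed here, pointing the test domain at the node running ingress-nginx (the IP is the one from the pod listing above):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
192.168.30.23  www.dagouzi.com
```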
With this deployment type, ingress-nginx runs on only one node; if that node or the ingress-nginx pod fails, our application services become unreachable.
To solve this, rather than scaling up replicas, we can convert the Deployment into a DaemonSet so that every node runs one pod; the replicas field is removed because a DaemonSet does not need it.
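A hedged sketch of the key changes to mandatory.yaml (only the fields that differ are shown; the labels and the rest of the manifest stay as downloaded, and the apps/v1 apiVersion is an assumption on my part):

```yaml
apiVersion: apps/v1
kind: DaemonSet              # was: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # replicas: 1              # removed: a DaemonSet schedules one pod per node
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true      # bind 80/443 directly on each node
      containers:
      - name: nginx-ingress-controller
        image: zhaocheng172/nginx-ingress-controller:0.20.0
```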
The previous resources must be deleted before the modified manifest can be applied.
[root@k8s-master demo]# kubectl delete -f mandatory.yaml
[root@k8s-master demo]# kubectl apply -f mandatory.yaml
[root@k8s-master demo]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-4s5ck   1/1     Running   0          38s   192.168.30.22   k8s-node1   <none>           <none>
nginx-ingress-controller-85rlq   1/1     Running   0          38s   192.168.30.23   k8s-node2   <none>           <none>
Check the listening ports: both node1 and node2 are now serving. This setup, however, is better suited to small clusters.
For larger setups, we would generally also run a pair of layer-4 load balancers in front of this DaemonSet.
User --> LB (VM running nginx/LVS/HAProxy) --> node1/node2 IPs (round-robin or another algorithm) --> pod
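A minimal sketch of such a layer-4 front end using the nginx stream module (the upstream IPs are the two node IPs above; the upstream names are illustrative, and nginx is assumed to be built with --with-stream):

```
# /etc/nginx/nginx.conf fragment on the external LB machine
stream {
    upstream ingress_http {
        server 192.168.30.22:80;    # k8s-node1
        server 192.168.30.23:80;    # k8s-node2
    }
    upstream ingress_https {
        server 192.168.30.22:443;
        server 192.168.30.23:443;
    }
    server {
        listen 80;
        proxy_pass ingress_http;    # round-robin by default
    }
    server {
        listen 443;
        proxy_pass ingress_https;   # TLS is terminated by ingress-nginx, not here
    }
}
```

Pairing two such machines with keepalived (as in the summary below) removes the LB itself as a single point of failure.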
[root@k8s-node1 ~]# netstat -anpt | grep :80
tcp    0   0 0.0.0.0:80      0.0.0.0:*   LISTEN   63219/nginx: master
tcp6   0   0 :::80           :::*        LISTEN   63219/nginx: master
tcp6   0   0 :::18080        :::*        LISTEN   63219/nginx: master
[root@k8s-node1 ~]# netstat -anpt | grep :443
tcp    0   0 0.0.0.0:443     0.0.0.0:*   LISTEN   63219/nginx: master
tcp6   0   0 :::443          :::*        LISTEN   63219/nginx: master
(plus ESTABLISHED connections from kube-proxy, flanneld and nginx-ingress to the apiserver on 6443, omitted here)
Accessing over HTTPS
[root@k8s-master cert]# cat cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@k8s-master cert]# sh cfssl.sh
[root@k8s-master cert]# ls
certs.sh  cfssl.sh
[root@k8s-master cert]# chmod +x certs.sh
[root@k8s-master cert]# sh certs.sh
This generates a certificate for our domain name: a key file and a pem file.
[root@k8s-master cert]# ls
blog.ctnrs.com.csr       blog.ctnrs.com-key.pem  ca-config.json  ca-csr.json  ca.pem   cfssl.sh
blog.ctnrs.com-csr.json  blog.ctnrs.com.pem      ca.csr          ca-key.pem   certs.sh
Load the key and certificate into K8s as a Secret; the Ingress will reference this Secret.
[root@k8s-master cert]# kubectl create secret tls blog-ctnrs-com --cert=blog.ctnrs.com.pem --key=blog.ctnrs.com-key.pem
[root@k8s-master cert]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
blog-ctnrs-com        kubernetes.io/tls                     2      3m1s
default-token-m6b7h   kubernetes.io/service-account-token   3      9d
[root@k8s-master demo]# vim ingress-https.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - blog.ctnrs.com
    secretName: blog-ctnrs-com
  rules:
  - host: blog.ctnrs.com
    http:
      paths:
      - path: /
        backend:
          serviceName: deployment-service
          servicePort: 80
[root@k8s-master demo]# kubectl create -f ingress-https.yaml
ingress.extensions/tls-example-ingress created
[root@k8s-master demo]# kubectl get ingress
NAME                  HOSTS             ADDRESS   PORTS     AGE
example-ingress       www.dagouzi.com             80        3h36m
tls-example-ingress   blog.ctnrs.com              80, 443   5s
The browser warns that the connection is not secure because we are using a self-signed certificate; replacing it with a certificate purchased from a trusted CA makes the warning go away.
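To see why the browser distrusts such a certificate, you can inspect one locally. This is a hedged illustration using openssl rather than the article's cfssl toolchain, with a throwaway domain-named certificate: for a self-signed certificate the subject and issuer are identical, which is exactly what the browser flags.

```shell
# Generate a throwaway self-signed certificate for the demo domain.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 1 -subj "/CN=blog.ctnrs.com" 2>/dev/null

# Print subject and issuer: for a self-signed cert they are the same CN.
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer
```

A CA-signed certificate would instead show the CA's name as the issuer, and the browser trusts it because that CA is in its root store.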
Summary:
Two ways to expose external access:
1. User --> LB (external load balancer + keepalived) --> ingress controller (node1/node2) --> pod
2. User --> node (VIP, ingress controller + keepalived active/standby) --> pod
Ingress (http/https) --> service --> pod