Principle and configuration of Ingress-nginx in K8s

2025-04-05 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/02 Report

Preface

In Kubernetes, the IP addresses of Services and Pods are only reachable within the cluster network and are not visible to applications outside the cluster. To allow external applications to access services inside the cluster, Kubernetes currently provides the following solutions:

NodePort
LoadBalancer
Ingress

NodePort was introduced in a previous blog post. To put it simply, a Service resource object provides a unified access interface for the backend Pods; that interface is then mapped to a port on the cluster nodes, so a client can reach the services provided by the backend Pods through the mapped port on any node.
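As a reminder of what the NodePort approach looks like, here is a minimal sketch of such a Service (the name `demo-svc`, the `app: demo` label, and port 30080 are illustrative placeholders, not part of this article's environment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc          # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: demo             # hypothetical Pod label
  ports:
  - port: 80              # Service port inside the cluster
    targetPort: 80        # container port on the backend Pods
    nodePort: 30080       # port exposed on every cluster node
```

Every such Service punches one more port through every node, which is exactly the drawback discussed next.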

However, this approach has a drawback: every time a new Pod-backed service is created, a corresponding Service must be created to map it to a node port. As the number of running Pods grows, the number of ports exposed to clients on our nodes grows with it, and so does the risk to the whole K8s cluster: when building the cluster, the official documentation makes it clear that the firewalld firewall must be turned off and the iptables rules flushed. With so many ports exposed to clients, the security level can be imagined.

Is there a safer and simpler way? The answer is yes: use the Ingress resource object.

Blog outline:

I. Introduction to Ingress-nginx

II. Ingress-nginx configuration example

III. Configure HTTPS

I. Introduction to Ingress-nginx

1. Components of Ingress-nginx

ingress-nginx-controller: according to the Ingress rules written by the user (the yaml file of the created Ingress), it dynamically rewrites the configuration file of the nginx service and reloads it to make the changes take effect (this is automated, implemented through Lua scripts).

Ingress resource object: the Nginx configuration is abstracted into an Ingress object. Each time a new Service resource object is added, you only need to write a new Ingress rule yaml file (or modify the yaml file of an existing Ingress rule).

2. What problems can Ingress-nginx solve?

1) dynamically configure services

In the traditional way, when we add a new service we may need to add a reverse proxy at the traffic entrance pointing to the new K8s service. With Ingress-nginx, you only need to configure the service; when the service starts, it is automatically registered in Ingress without any additional operation.

2) reduce unnecessary port mapping

Anyone who has deployed K8s knows that the first step is to turn off the firewall, largely because many K8s services are exposed through NodePort, which is tantamount to punching many holes in the host: neither safe nor elegant. Ingress avoids this problem: apart from Ingress's own service, which may need to be exposed, other services should not use NodePort mode.

3. Working principle of Ingress-nginx

1) The ingress controller dynamically senses changes to the Ingress rules in the cluster by interacting with the kubernetes api.

2) It then reads the rules. The rules specify which domain name corresponds to which service, and from them a nginx configuration is generated.

3) The configuration is written into the pod of nginx-ingress-controller: a Nginx service runs inside the Ingress controller's pod, and the controller writes the generated nginx configuration into its /etc/nginx.conf file.

4) Nginx is then reloaded to make the configuration take effect. In this way, both domain name configuration and dynamic updates are achieved.
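To make step 2) concrete: for a rule like "www.test01.com routes to httpd-svc", the configuration the controller renders is, in spirit, an ordinary nginx virtual host. This is a simplified sketch only; the file actually generated by the controller contains many more Lua hooks, and the upstream is the Service's live endpoint (Pod) list rather than a static proxy_pass target:

```nginx
# Simplified sketch of what the controller renders into nginx.conf
server {
    listen 80;
    server_name www.test01.com;    # taken from the Ingress rule's host field

    location / {
        # in the real generated config, the upstream is the list of Pod
        # endpoints behind the Service, kept current by the Lua balancer
        proxy_pass http://httpd-svc.test-ns.svc.cluster.local:80;
    }
}
```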

II. Ingress-nginx configuration example

Note: all the yaml files and images in this article can be downloaded through my network disk link (except httpd and tomcat).

1. Build a registry private repository (you can skip the private registry and instead manually import the needed images on the corresponding nodes):

[root@master ~]# docker run -tid --name registry -p 5000:5000 --restart always registry    # run the registry container
[root@master ~]# vim /usr/lib/systemd/system/docker.service    # modify the configuration file to trust the private registry
ExecStart=/usr/bin/dockerd -H unix:// --insecure-registry 192.168.20.6:5000
# Send the modified file to the other nodes in the k8s cluster
[root@master ~]# scp /usr/lib/systemd/system/docker.service root@node01:/usr/lib/systemd/system/docker.service
[root@master ~]# scp /usr/lib/systemd/system/docker.service root@node02:/usr/lib/systemd/system/
# Execute the following commands on every node, including master, to restart docker so the change takes effect
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
# Upload the images needed for the test to the private registry
[root@master ~]# docker push 192.168.20.6:5000/httpd:v1
[root@master ~]# docker push 192.168.20.6:5000/tomcat:v1

2. Create a namespace (you can also skip this and use the default namespace; in that case, delete all the custom-namespace fields from the yaml files below):

[root@master ~]# kubectl create ns test-ns    # create namespace test-ns
[root@master ~]# kubectl get ns               # confirm that the creation succeeded

3. Create the Deployment and Service resource objects.

1) Create the httpd service and its Service:

[root@master test]# vim httpd-01.yaml    # write the resource objects for the httpd service
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web01
  namespace: test-ns
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd01
    spec:
      containers:
      - name: httpd
        image: 192.168.20.6:5000/httpd:v1
---
apiVersion: v1
kind: Service
metadata:
  name: httpd-svc
  namespace: test-ns
spec:
  selector:
    app: httpd01
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

[root@master test]# kubectl apply -f httpd-01.yaml    # execute the yaml file

2) Create the tomcat service and its Service:

[root@master test]# vim tomcat-01.yaml    # write the yaml file as follows
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web02
  namespace: test-ns
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: tomcat01
    spec:
      containers:
      - name: tomcat
        image: 192.168.20.6:5000/tomcat:v1
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: test-ns
spec:
  selector:
    app: tomcat01
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

[root@master test]# kubectl apply -f tomcat-01.yaml    # execute the yaml file

3) Confirm that the resource objects above were created successfully:

[root@master test]# kubectl get po -n test-ns    # make sure the pods are running normally
NAME                     READY   STATUS    RESTARTS   AGE
web01-757cfc547d-fmjnt   1/1     Running   0          8m24s
web01-757cfc547d-pjrrt   1/1     Running   0          9m30s
web01-757cfc547d-v7tdb   1/1     Running   0          8m24s
web02-57c46c759d-l9qzx   1/1     Running   0          4m9s
web02-57c46c759d-vs6mg   1/1     Running   0          4m9s
web02-57c46c759d-zknrw   1/1     Running   0          4m9s
[root@master test]# kubectl get svc -n test-ns   # confirm the Services were created
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
httpd-svc    ClusterIP   10.107.211.219   <none>        80/TCP     10m
tomcat-svc   ClusterIP   10.101.159.1     <none>        8080/TCP   5m8s

# Access the Services' ClusterIP + port to confirm the backend Pods are reachable
[root@master test]# curl -I 10.101.159.1:8080    # access tomcat
HTTP/1.1 200                                     # the returned status code is 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Fri, 22 Nov 2019 12:34:32 GMT
[root@master test]# curl -I 10.107.211.219:80    # access httpd
HTTP/1.1 200 OK                                  # status code is 200
Date: Fri, 22 Nov 2019 12:34:39 GMT
Server: Apache/2.4.41 (Unix)                     # the version number is also shown
Last-Modified: Sat, 16 Nov 2019 10:00:39 GMT
ETag: "1a-59773c95e7fc0"
Accept-Ranges: bytes
Content-Length: 26
Content-Type: text/html

# If a pod cannot be reached in the access test above, it is recommended to run "kubectl describe svc"
# and check whether the Endpoints column of the corresponding Service lists any associated backend pods.

4. Create an Ingress-nginx resource object

Download the image I provided and import it on the node where you want to run Ingress-nginx, or download another image yourself.

Method 1: go to GitHub, search for ingress-nginx, click "deploy", and then follow the link at the bottom of the page to see the following command:

# Do not copy the command directly into the terminal; download the yaml file first
[root@master test]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
[root@master test]# vim mandatory.yaml    # modify the yaml file as follows
    spec:                                 # this is the spec field, around line 212
      hostNetwork: true                   # add this line: use the host network
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        Ingress: nginx                    # node label selector: specify which node to run on
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          # if you need to change the image name, change it here; I keep the default
# Save and exit after the modification
[root@master test]# kubectl label nodes node01 Ingress=nginx    # label node01 accordingly, so Ingress-nginx is pinned to node01
[root@master test]# kubectl get nodes node01 --show-labels      # confirm that the node01 label exists
# Execute the following command on node01 to import the Ingress-nginx image
# (upload this package yourself; it is in the network disk link I provided)
[root@node01 ~]# docker load < nginx-ingress-controller.0.26.1.tar    # manually import the ingress-nginx image on node01
# Back on the master node, execute the yaml file
[root@master test]# kubectl apply -f mandatory.yaml    # execute the ingress-nginx yaml file

About the "hostNetwork: true" written into the yaml file above: with this network parameter, the application running in the pod uses the Node's ports directly, so any other host on the node's network can access the application through the node.

Confirm that the Ingress-nginx container is running normally:

[root@master test]# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
nginx-ingress-controller-77c8f6577b-6shdc   1/1     Running   0          107s   192.168.20.7   node01   <none>           <none>

5. Define the Ingress rules (write the ingress yaml file):

[root@master test]# vim ingress.yaml    # write the yaml file as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: www.test01.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpd-svc
          servicePort: 80
      - path: /tomcat
        backend:
          serviceName: tomcat-svc
          servicePort: 8080

[root@master test]# kubectl apply -f ingress.yaml       # execute the ingress rule yaml file
[root@master test]# kubectl get ingresses -n test-ns    # view the ingress rule resource object
NAME           HOSTS            ADDRESS   PORTS   AGE
test-ingress   www.test01.com             80      28s

In fact, the desired function is already achieved at this point: the service provided by the backend httpd container can be reached through www.test01.com, and the backend tomcat service through www.test01.com/tomcat, provided that you configure DNS resolution yourself or modify the client's hosts file directly. The access pages are as follows (note: you must solve domain name resolution yourself; if you do not know which IP the domain should point to, skip these two screenshots and read the explanation below):

Access the httpd service (the home page content is my own custom page): (screenshot)

Access the tomcat service: (screenshot)

In the access test above, the services were reachable, but there is a drawback: when configuring DNS resolution, you can only point the domain at the IP of the node where the Ingress-nginx container runs. Pointing it at any other node in the k8s cluster (including master) does not work, and if that node goes down, the Ingress-nginx container is moved to another node (leaving node labels aside: if you keep the default labels in the Ingress-nginx yaml file, every node carries that label). You then have to manually change the IP in the DNS record to the IP of the node now hosting the Ingress-nginx container (which "kubectl get pod -n ingress-nginx -o wide" will show). That is quite troublesome.

Is there a simpler way? The answer is yes: create another Service of type NodePort for the Ingress-nginx rules. Then, when configuring DNS resolution, www.test01.com can be bound to the IP of any node, including master, which is much more flexible.

6. Create a Service for the Ingress rules

On the same page where you found the Ingress-nginx yaml file, scroll down and you will see the following; choose the yaml file that suits your k8s cluster environment. If your cluster was built on the Azure cloud platform, copy the command under Azure; mine is my own test environment, so I choose the yaml file under Bare-metal:
There are two ways to create this Service: one is to copy the command from the web page directly to the master node and execute it; the other is to copy its link, download the file with wget, and then execute it. I choose the second, because I want to look at the contents:

# Download it locally
[root@master test]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
[root@master test]# cat service-nodeport.yaml    # just viewing the contents; no modification is needed
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
[root@master test]# kubectl apply -f service-nodeport.yaml    # execute the downloaded yaml file
[root@master test]# kubectl get svc -n ingress-nginx          # view the running service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.109.106.246   <none>        80:30465/TCP,443:32432/TCP   56s
# You can see that the service maps ports 80 and 443 to node ports 30465 and 32432
# (randomly mapped; you can also modify the yaml file to specify the ports).

At this point, the domain www.test01.com can be bound to ports 30465/32432 of any node in the cluster. Test it as follows (the IP in the DNS record can be any node IP in the k8s cluster):
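For the access tests in this article, the simplest way to "solve domain name resolution yourself" is a hosts-file entry on the client. A sketch, assuming 192.168.20.6 is one of your node IPs (substitute any node of your own cluster; once the NodePort Service exists, every node works):

```text
# /etc/hosts on the client (C:\Windows\System32\drivers\etc\hosts on Windows)
192.168.20.6  www.test01.com
```

A real deployment would use a DNS record pointing at the nodes (or a load balancer in front of them) instead.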

At this point, the initial requirements have been realized.

Ingress rules based on Virtual Host

Here is another requirement: I need both www.test01.com and www.test02.com to map to the service provided by my backend httpd container. How should this be configured?

The configuration is as follows (based on the environment above):

[root@master test]# vim ingress.yaml    # modify the ingress rule yaml file as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: www.test02.com               # add this host configuration
    http:
      paths:
      - path: /
        backend:
          serviceName: httpd-svc       # bind the same service name as www.test01.com
          servicePort: 80
  - host: www.test01.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpd-svc
          servicePort: 80
      - path: /tomcat
        backend:
          serviceName: tomcat-svc
          servicePort: 8080

# After adding the host field above, save and exit, then re-execute the yaml file
[root@master test]# kubectl apply -f ingress.yaml

At this point, both www.test01.com and www.test02.com reach the page provided by the backend httpd (solve the domain name resolution yourself), as follows:

Visit www.test01.com:30465

Visit www.test02.com:30465

To summarize, step by step, how the pods in the example above become accessible to the client:

backend pod ==> service ==> ingress rule ==> written into the Ingress-nginx-controller configuration file and automatically reloaded to take effect ==> NodePort service created for Ingress-nginx ==> client can access the backend pod through any K8s node's IP + port

III. Configure HTTPS

In the operations above, we used ingress-nginx to provide a unified entrance for all backend pods. That leaves a serious problem to consider: how do we configure CA certificates for our pods to enable HTTPS access? Configure the CA directly in each pod? How much repetitive work would that take? Moreover, pods can be killed and re-created by kubelet at any time. There are many solutions to this, such as building the CA directly into the image, but that requires a large number of CA certificates.

Here is an easier way. In the case above, there are multiple pods at the backend; the pods are associated with a service; the service is discovered by the ingress rule and dynamically written into the ingress-nginx-controller container; and a Service that maps ports on the cluster nodes is then created for ingress-nginx-controller so that clients can reach it.

In this whole chain, the key point is the ingress rule. We only need to configure the CA certificate for the domain name in the ingress yaml file. As long as the domain name is reachable over HTTPS, how the domain name is associated with the backend pods is internal cluster communication, and using plain http there does no harm.
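Since the ingress rule references the certificate through a Secret, it helps to know what that object looks like. A `kubernetes.io/tls` Secret is, in sketch form (the data values are placeholders for the base64-encoded file contents, e.g. the output of `base64 -w0 tls.crt`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret             # referenced by secretName in the Ingress rule
type: kubernetes.io/tls
data:
  tls.crt: <base64 of tls.crt>   # placeholder
  tls.key: <base64 of tls.key>   # placeholder
```

The `kubectl create secret tls` command used below builds exactly this object for you from the two files.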

The configuration is as follows:

The following configuration is largely independent of the one above, but since the Ingress-nginx-controller container is already running, it does not need to be deployed again; only the pod, service, and ingress rules need to be configured.

# Create a CA certificate (test environment: create it yourself)
[root@master https]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
# Two files are generated in the current directory, as follows:
[root@master https]# ls    # make sure these two files exist in the current directory
tls.crt  tls.key
# Store the generated certificate in etcd as a Secret
[root@master https]# kubectl create secret tls tls-secret --key=tls.key --cert=tls.crt
# Create the Deployment, Service, and Ingress resource objects
[root@master https]# vim httpd03.yaml    # write the yaml file as follows
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web03
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: httpd03
    spec:
      containers:
      - name: httpd3
        image: 192.168.20.6:5000/httpd:v1
---
apiVersion: v1
kind: Service
metadata:
  name: httpd-svc3
spec:
  selector:
    app: httpd03
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress3
spec:
  tls:
  - hosts:
    - www.test03.com
    secretName: tls-secret    # the name of the Secret holding the certificate stored in etcd
  rules:
  - host: www.test03.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpd-svc3
          servicePort: 80

[root@master https]# kubectl apply -f httpd03.yaml    # execute the yaml file
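Before moving on, you can sanity-check the generated certificate locally with openssl. This is a standalone sketch that re-runs the generation step from above; no cluster is required:

```shell
# Re-create the self-signed test certificate (same command as above)
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

# Inspect the subject and validity window before creating the Secret
openssl x509 -in tls.crt -noout -subject -dates

# The certificate and private key must share the same modulus
openssl x509 -in tls.crt -noout -modulus
openssl rsa  -in tls.key -noout -modulus
```

If the two modulus lines differ, the key and certificate do not belong together and the TLS Secret would be rejected by nginx at load time.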

Verify that the created resource object is working properly:

[root@master https]# kubectl get svc    # view the svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
httpd-svc3   ClusterIP   10.98.180.104   <none>        80/TCP    31s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   19d
[root@master https]# kubectl get pod    # view the pods
NAME                    READY   STATUS    RESTARTS   AGE
web03-66dfbc8cf-w6vvp   1/1     Running   0          34s
web03-66dfbc8cf-zgxd7   1/1     Running   0          34s
[root@master https]# kubectl describe ingresses    # view the ingress rule
Name:             test-ingress3
Namespace:        default
Address:          10.109.106.246
Default backend:  default-http-backend:80 (<none>)
TLS:
  tls-secret terminates www.test03.com
Rules:
  Host            Path  Backends
  www.test03.com  /     httpd-svc3:80 (10.244.1.13:80, 10.244.2.9:80)    # the backend pods associated with the service

HTTPS access test:

Use https://www.test03.com to access the service (solve the domain name resolution yourself).

This is the end of this article. Thank you for reading.
