
Understanding and Deploying Ingress for Service in K8S


This article explains what Ingress means for Service in K8S and how to deploy it. The content is quite detailed; interested readers can use it for reference, and I hope you find it helpful.

Preface

Ingress can be understood as the Service of Services: a layer built in front of the existing Services that acts as the unified entry point for external traffic and routes each request to the right backend.

To put it bluntly, it stands up an nginx or haproxy at the front end, forwards different hosts or URLs to the corresponding backend Service, and the Service then forwards to the Pods. Ingress simply adds a layer of decoupling and abstraction on top of nginx/haproxy.

The significance of Ingress

Ingress makes up for the shortcomings of exposing a default Service directly to the public network: a default Service cannot apply layer-7 URL rules at a unified entry point, and it can only front a single backend service.
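For example, here is a minimal sketch of a layer-7 rule that routes two URL paths on one host to two different backend Services, something a single NodePort Service cannot do. The service names web-svc and api-svc are hypothetical, and the API version matches the v1beta1 manifest used later in this article:

# path-based routing sketch (web-svc and api-svc are assumed names)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-routing-example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /web        # example.com/web -> web-svc
        backend:
          serviceName: web-svc
          servicePort: 80
      - path: /api        # example.com/api -> api-svc
        backend:
          serviceName: api-svc
          servicePort: 80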

Generally speaking, ingress consists of two parts: the ingress-controller and Ingress objects.

The ingress-controller corresponds to the nginx/haproxy program itself and runs as a Pod.

An Ingress object corresponds to the nginx/haproxy configuration file.

The ingress-controller watches the information described in Ingress objects and rewrites the nginx/haproxy rules inside its own Pod accordingly.
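A quick way to see the two halves side by side (the ingress-nginx namespace and labels are assumed to match the deployment used later in this article):

# the Ingress objects (the "configuration file" half)
kubectl get ingress --all-namespaces
# the controller Pods (the "nginx program" half)
kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx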

Deploy ingress

Prepare test resources

Deploy 2 services: accessing service 1 returns Version 1, and accessing service 2 returns Version 2.

Configuration of the two services:

# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v1.0
spec:
  selector:
    matchLabels:
      app: v1.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v1.0
    spec:
      containers:
      - name: hello-v1
        image: anjia0532/google-samples.hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v2.0
spec:
  selector:
    matchLabels:
      app: v2.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v2.0
    spec:
      containers:
      - name: hello-v2
        image: anjia0532/google-samples.hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-v1
spec:
  selector:
    app: v1.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: service-v2
spec:
  selector:
    app: v2.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP

The containers listen on port 8080 and the Services listen on port 8081.

Start the two Services and their corresponding Pods:

# kubectl apply -f deployment.yaml
deployment.apps/hello-v1.0 created
deployment.apps/hello-v2.0 created
service/service-v1 created
service/service-v2 created

Check the startup status; each Service corresponds to 3 Pods:

# kubectl get pod,service -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/hello-v1.0-6594bd8499-lt6nn   1/1     Running   0          37s   192.10.205.234   work01   <none>           <none>
pod/hello-v1.0-6594bd8499-q58cw   1/1     Running   0          37s   192.10.137.190   work03   <none>           <none>
pod/hello-v1.0-6594bd8499-zcmf4   1/1     Running   0          37s   192.10.137.189   work03   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-9wr65   1/1     Running   0          37s   192.10.75.89     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-pnhr8   1/1     Running   0          37s   192.10.75.91     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-sx949   1/1     Running   0          37s   192.10.205.236   work01   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/service-v1   ClusterIP   192.20.92.221   <none>        8081/TCP   37s   app=v1.0
service/service-v2   ClusterIP   192.20.255.0    <none>        8081/TCP   36s   app=v2.0

Check the Pods mounted behind each Service:

[root@master01 ~]# kubectl get ep service-v1
NAME         ENDPOINTS                                                      AGE
service-v1   192.10.137.189:8080,192.10.137.190:8080,192.10.205.234:8080   113s
[root@master01 ~]# kubectl get ep service-v2
NAME         ENDPOINTS                                                      AGE
service-v2   192.10.205.236:8080,192.10.75.89:8080,192.10.75.91:8080       113s

You can see that both Services have successfully mounted their corresponding Pods.
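Before putting ingress in front, you can sanity-check a Service from any cluster node by curling its ClusterIP (address taken from the kubectl output above; the expected response is the hello-app output shown in the curl tests later):

# from any cluster node; ClusterIP of service-v1 from the output above
curl 192.20.92.221:8081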

Let's deploy the front-end ingress-controller.

First, designate work01/work02 as the two servers that will run the ingress-controller:

# kubectl label nodes work01 ingress-ready=true
# kubectl label nodes work02 ingress-ready=true
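These labels take effect because the kind provider manifest downloaded below pins the controller Pods with a matching nodeSelector, roughly like this (a sketch of the relevant fragment, not the full manifest):

# fragment of the controller Deployment in deploy.yaml
spec:
  template:
    spec:
      nodeSelector:
        ingress-ready: "true"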

The ingress-controller uses the official nginx ingress controller manifest:

# wget -O ingress-controller.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml

Modify it to run 2 ingress-controller replicas:

# vim ingress-controller.yaml
apiVersion: apps/v1
kind: Deployment
...
  revisionHistoryLimit: 10
  replicas: 2    # add this line

Switch the image to a mirror hosted in China:

# vim ingress-controller.yaml
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: controller
        # image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
        imagePullPolicy: IfNotPresent

Deploy ingress-controller

# kubectl apply -f ingress-controller.yaml

Check the running status:

# kubectl get pod,service -n ingress-nginx -o wide
NAME                                            READY   STATUS      RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-ld4nt        0/1     Completed   0          15m   192.10.137.188   work03   <none>           <none>
pod/ingress-nginx-admission-patch-p5jmd         0/1     Completed   1          15m   192.10.75.85     work02   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-vxt4d   1/1     Running     0          15m   192.10.205.233   work01   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-zmjg2   1/1     Running     0          15m   192.10.75.87     work02   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE   SELECTOR
service/ingress-nginx-controller             NodePort    192.20.105.10   192.168.10.17   80:30698/TCP,443:31303/TCP   15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP   192.20.80.208   <none>          443/TCP                      15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

You can see the ingress-nginx-controller Pods running on work01/02.

Write access request forwarding rules

# cat ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-v1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v1
          servicePort: 8081
  - host: test-v2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v2
          servicePort: 8081
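Note that networking.k8s.io/v1beta1 was removed in Kubernetes 1.22; on newer clusters the same rules would be written against the networking.k8s.io/v1 API, roughly as follows. This walkthrough sticks with the v1beta1 file above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-v1.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-v1
            port:
              number: 8081
  - host: test-v2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-v2
            port:
              number: 8081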

Apply the rules:

# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/nginx-ingress created

You can now see that the nginx configuration inside the ingress-controller Pod has taken effect:

# kubectl exec ingress-nginx-controller-75f89c4965-vxt4d -n ingress-nginx -- cat /etc/nginx/nginx.conf | grep -A 30 test-v1.com
    server {
        server_name test-v1.com ;
        listen 80 ;
        listen 443 ssl http2 ;
        set $proxy_upstream_name "-";
        ssl_certificate_by_lua_block {
            certificate.call()
        }
        location / {
            set $namespace      "default";
            set $ingress_name   "nginx-ingress";
            set $service_name   "service-v1";
            set $service_port   "8081";
            set $location_path  "/";

Now let's test access from outside the cluster.

First, resolve the domain names to work01:

# cat /etc/hosts
192.168.10.15 test-v1.com
192.168.10.15 test-v2.com

Access test

# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-svjnf
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-zqjtm
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-www76
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-sn84j

You can see that requests for different domain names are routed to the Pods behind the correct Service.

Now point the domain names at work02 and test again:

# cat /etc/hosts
192.168.10.16 test-v1.com
192.168.10.16 test-v2.com
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-www76
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-zqjtm
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-sn84j
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862

No problem.

How to achieve high availability

Hang an LVS+keepalived pair in front of work01/work02 to provide highly available access to them. Alternatively, use keepalived to float a VIP directly across work01/work02 without any extra machines, saving costs.
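A minimal keepalived sketch for floating a VIP across work01/work02; the VIP 192.168.10.100 and interface eth0 are assumptions for this environment, and work02 would run the same block with state BACKUP and a lower priority:

# /etc/keepalived/keepalived.conf on work01 (MASTER)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.10.100    # assumed VIP; clients resolve test-v1.com/test-v2.com to it
    }
}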

In the setup above, the ingress-controller Pods are managed by a Deployment and the ingress Service is exposed via NodePort.

View the ingress Service:

# kubectl get service -o wide -n ingress-nginx
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   NodePort   192.20.105.10   192.168.10.17   80:30698/TCP,443:31303/TCP   22m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

You can see that port 30698 is exposed; accessing port 30698 on any node reaches the v1/v2 Pods, as shown below.
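For example, hitting the NodePort on any node while supplying the Host header routes the same way (node IPs from this environment; the header is required because the routing is host-based):

# curl -H "Host: test-v1.com" http://192.168.10.15:30698
# curl -H "Host: test-v2.com" http://192.168.10.16:30698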

But this port is assigned randomly and will change if the Service is recreated, so instead we can access port 80 directly on work01/02, where the ingress-controller Pods run.

Again, put LVS+keepalived in front of work01/02 for a highly available load-balanced entry point.

Running iptables -t nat -L -n -v on work01/02 shows that port 80 is exposed via NAT, which becomes a bottleneck under heavy traffic.

Instead, the ingress-controller can be deployed as a DaemonSet with HostNetwork.

That way, port 80 exposed on work01/02 binds directly to the host network with no NAT mapping, avoiding the performance problem; see the sketch below.
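A sketch of the relevant changes to the controller manifest for the DaemonSet + HostNetwork approach; only the changed fields are shown, and the rest of the spec carries over from the Deployment above:

apiVersion: apps/v1
kind: DaemonSet          # changed from Deployment; no replicas field needed
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      hostNetwork: true                      # bind 80/443 directly on the host
      dnsPolicy: ClusterFirstWithHostNet     # keep cluster DNS working with hostNetwork
      nodeSelector:
        ingress-ready: "true"                # still only on work01/work02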

That concludes this look at understanding and deploying Ingress for Service in K8S. I hope the above content was helpful and taught you something new. If you found the article worthwhile, please share it so more people can see it.
