
An introduction to Kubernetes services and how to create them

2025-01-19 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/01 Report --

Many newcomers are unsure what Kubernetes services are or how to create them. This article summarizes the concepts and walks through examples; I hope it helps you work through the topic.

Preface

The Kubernetes replication mechanism introduced previously ensures that your deployments keep running and stay healthy without any manual intervention. This article introduces another powerful Kubernetes feature: the service, which provides a layer between clients and pods with a single access point, making it more convenient for clients to use pods.

Service

A Kubernetes service is a resource that provides a single, constant access point for a group of pods with the same function. While the service exists, its IP address and port do not change; clients establish connections through that IP address and port, and those connections are routed to any pod backing the service.

1. Create a service

Connections to the service are load-balanced across all backend pods. Which pods belong to which service is determined by the label selector set when the service is defined.

[d:\K8s]$ kubectl create -f kubia-rc.yaml
replicationcontroller/kubia created
[d:\K8s]$ kubectl get pod
NAME          READY   STATUS              RESTARTS   AGE
kubia-6dxn7   0/1     ContainerCreating   0          4s
kubia-fhxht   0/1     ContainerCreating   0          4s
kubia-fpvc7   0/1     ContainerCreating   0          4s

Create the pods using the earlier YAML file; the label set in the template is app: kubia, so the YAML for creating the service must specify the same label (the kubectl expose command described earlier can also create a service):

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

First, the resource type is Service; then two ports are specified: port is the port the service exposes, and targetPort is the port the process in the pod listens on; finally comes the label selector: pods carrying the matching label are managed by this service.

[d:\K8s]$ kubectl create -f kubia-svc.yaml
service/kubia created
[d:\K8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   6d15h
kubia        ClusterIP   10.96.191.193   <none>        80/TCP    4s
[d:\K8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.191.193
You've hit kubia-fhxht
[d:\K8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.191.193
You've hit kubia-fpvc7

After creating the service, you can see that kubia has been assigned a CLUSTER-IP, which is an internal IP. To test it, use the kubectl exec command to run a command remotely in an existing pod container; any of the three pod names will do. The curl request goes to the Service, which decides which pod handles it, so running the command several times shows a different pod responding each time. If you want all requests from a particular client to reach the same pod every time, set the service's sessionAffinity attribute to ClientIP.

1.1 Configure session affinity

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  sessionAffinity: ClientIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

Everything is the same except that sessionAffinity: ClientIP is added.

[d:\K8s]$ kubectl delete svc kubia
service "kubia" deleted
[d:\K8s]$ kubectl create -f kubia-svc-client-ip-session-affinity.yaml
service/kubia created
[d:\K8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   6d15h
kubia        ClusterIP   10.96.51.99   <none>        80/TCP    25s
[d:\K8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.51.99
You've hit kubia-fhxht
[d:\K8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.51.99
You've hit kubia-fhxht

1.2 Expose multiple ports in the same service

If a pod listens on two or more ports, the service can expose multiple ports as well:

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
  selector:
    app: kubia

Since the Node.js app listens only on port 8080, both ports in the Service point to the same target port; let's verify that both can be accessed:

[d:\K8s]$ kubectl create -f kubia-svc-named-ports.yaml
service/kubia created
[d:\K8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          6d18h
kubia        ClusterIP   10.96.13.178   <none>        80/TCP,443/TCP   7s
[d:\K8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.13.178
You've hit kubia-fpvc7
[d:\K8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.13.178:443
You've hit kubia-fpvc7

You can find that both ports can be accessed.

1.3 Use named ports

Port 8080 is hard-coded in the Service; if the target port changes, the Service must change too. Instead, you can name the port in the pod template and refer to that name in the Service:

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia
        ports:
        - name: http
          containerPort: 8080

The previous ReplicationController is slightly modified so that the port has a name; the Service YAML is modified to use the name directly:

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: kubia

targetPort now refers to the port name http directly:

[d:\K8s]$ kubectl create -f kubia-rc2.yaml
replicationcontroller/kubia created
[d:\K8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-4m9nv   1/1     Running   0          66s
kubia-bm6rx   1/1     Running   0          66s
kubia-dh87r   1/1     Running   0          66s
[d:\K8s]$ kubectl create -f kubia-svc2.yaml
service/kubia created
[d:\K8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   7d
kubia        ClusterIP   10.96.106.37   <none>        80/TCP    10s
[d:\K8s]$ kubectl exec kubia-4m9nv -- curl -s http://10.96.106.37
You've hit kubia-dh87r

2. Service discovery

The service gives us a single, constant IP for reaching the pods. But must we really look up the service's CLUSTER-IP every time we create one and hand it to other pods? That would be cumbersome, so Kubernetes also provides other ways to discover services.

2.1 Discovering services through environment variables

When a pod starts running, Kubernetes initializes a set of environment variables pointing to the services that already exist; if a service was created before the client pod, processes in that pod can obtain the service's IP address and port number from the environment variables:

[d:\K8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   7d14h
kubia        ClusterIP   10.96.106.37   <none>        80/TCP    14h
[d:\K8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-4m9nv   1/1     Running   0          14h
kubia-bm6rx   1/1     Running   0          14h
kubia-dh87r   1/1     Running   0          14h
[d:\K8s]$ kubectl exec kubia-4m9nv env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubia-4m9nv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
NPM_CONFIG_LOGLEVEL=info
NODE_VERSION=7.10.1
YARN_VERSION=0.24.4
HOME=/root

Because these pods were created before the service, the environment contains no information about the service:

[d:\K8s]$ kubectl delete po --all
pod "kubia-4m9nv" deleted
pod "kubia-bm6rx" deleted
pod "kubia-dh87r" deleted
[d:\K8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-599v9   1/1     Running   0          48s
kubia-8s8j4   1/1     Running   0          48s
kubia-dm6kr   1/1     Running   0          48s
[d:\K8s]$ kubectl exec kubia-599v9 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubia-599v9
...
KUBIA_SERVICE_HOST=10.96.106.37
KUBIA_SERVICE_PORT=80
...

If you delete the pods and let new ones be created, so that the service exists before the pods, you will find KUBIA_SERVICE_HOST and KUBIA_SERVICE_PORT, which hold the IP address and port number of the kubia service. The service's IP and port can thus be obtained from the environment.
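As a sketch of how a process would consume these variables, the snippet below assembles the service URL from them; the values are hard-coded stand-ins for what Kubernetes injects into a real pod:

```shell
# In a real pod these two variables are injected by Kubernetes;
# here they are set manually so the sketch runs anywhere.
KUBIA_SERVICE_HOST=10.96.106.37
KUBIA_SERVICE_PORT=80

# Build the base URL an application would use to call the service.
URL="http://${KUBIA_SERVICE_HOST}:${KUBIA_SERVICE_PORT}"
echo "$URL"
```

Inside a real pod the first two lines would be omitted, and the resulting URL could be passed straight to curl or an HTTP client library.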

2.2 Discovering services through DNS

There is a default service kube-dns in the kube-system namespace, backed by coredns pods:

[d:\K8s]$ kubectl get svc --namespace kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   9d
[d:\K8s]$ kubectl get po -o wide --namespace kube-system
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
coredns-7f9c544f75-h3cwn   1/1     Running   0          9d    172.17.0.3   minikube   <none>           <none>
coredns-7f9c544f75-x2ttk   1/1     Running   0          9d    172.17.0.2   minikube   <none>           <none>

DNS queries from processes running in pods are answered by Kubernetes' own DNS server, which knows all the services running in the system; a client pod that knows a service's name can reach it through its fully qualified domain name (FQDN):

[d:\K8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia.default.svc.cluster.local
You've hit kubia-8s8j4

Here kubia is the service name, default is the namespace the service resides in, and svc.cluster.local is the configurable cluster domain suffix used in all cluster-local service names. If the two pods are in the same namespace, you can omit svc.cluster.local and default and use just the service name:
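The name assembly can be sketched as plain string composition, using the values from this article's example:

```shell
# The three pieces a cluster-local DNS name is built from.
SVC=kubia                  # service name
NS=default                 # namespace of the service
SUFFIX=svc.cluster.local   # cluster domain suffix (configurable)

FQDN="${SVC}.${NS}.${SUFFIX}"
echo "$FQDN"          # full form, works from any namespace
echo "${SVC}.${NS}"   # suffix omitted
echo "$SVC"           # same namespace: service name alone is enough
```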

[d:\K8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia.default
You've hit kubia-dm6kr
[d:\K8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia
You've hit kubia-dm6kr

2.3 Run a shell in a pod container

d:\K8s> winpty kubectl exec -it kubia-599v9 -- sh
# curl -s http://kubia
You've hit kubia-dm6kr
# exit

Running a shell in a pod container via kubectl exec avoids issuing a separate kubectl exec for every command; the winpty tool is needed here because this is a Windows environment.

Connecting to services outside the cluster

The backends described so far are pods running inside the cluster, but there are also cases where you want to expose external services through the Kubernetes service feature: via Endpoints objects or via an external service alias.

1. Endpoints

The service is not linked directly to the pods; a resource sits in between: the Endpoints resource.

[d:\K8s]$ kubectl describe svc kubia
Name:              kubia
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=kubia
Type:              ClusterIP
IP:                10.96.106.37
Port:              <unset>  80/TCP
TargetPort:        http/TCP
Endpoints:         172.17.0.10:8080,172.17.0.11:8080,172.17.0.9:8080
Session Affinity:  None
Events:            <none>
[d:\K8s]$ kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
kubia-599v9   1/1     Running   0          3h61m   172.17.0.10   minikube   <none>           <none>
kubia-8s8j4   1/1     Running   0          3h61m   172.17.0.11   minikube   <none>           <none>
kubia-dm6kr   1/1     Running   0          3h61m   172.17.0.9    minikube   <none>           <none>

You can see that Endpoints holds the pods' IPs and ports; when a client connects to the service, the service proxy selects one of these IP and port pairs and redirects the incoming connection to the server listening there.

2. Manually configure the endpoints of the service (internal)

If you create a service without a pod selector, Kubernetes does not create the Endpoints resource; you must create one yourself to specify the endpoint list for the service:

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80

No selector is specified in the definition above:

[d:\K8s]$ kubectl create -f external-service.yaml
service/external-service created
[d:\K8s]$ kubectl get svc external-service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
external-service   ClusterIP   10.96.241.116   <none>        80/TCP    74s
[d:\K8s]$ kubectl describe svc external-service
Name:              external-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.241.116
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

Because no selector was specified, the Endpoints of external-service is none. In this case, the service's endpoints can be configured manually:

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 172.17.0.9
  - ip: 172.17.0.10
  ports:
  - port: 8080

The Endpoints object must have the same name as the service and contains the list of target IP addresses and ports for the service:

[d:\K8s]$ kubectl create -f external-service-endpoints.yaml
endpoints/external-service created
[d:\K8s]$ kubectl describe svc external-service
Name:              external-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.241.116
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.17.0.10:8080,172.17.0.9:8080
Session Affinity:  None
Events:            <none>
[d:\K8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service
You've hit kubia-dm6kr

After creating the Endpoints object, the external-service service now lists the pods' IP addresses and ports under Endpoints, and a request sent via kubectl exec succeeds.

3. Manually configure the endpoint of the service (external)

The endpoints configured above pointed at IPs and ports inside Kubernetes; you can also configure an external IP and port, backed by a service started outside Kubernetes:

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 10.13.82.21
  ports:
  - port: 8080

Here 10.13.82.21:8080 is an ordinary Tomcat service that can be started locally.

[d:\K8s]$ kubectl create -f external-service-endpoints2.yaml
endpoints/external-service created
[d:\K8s]$ kubectl create -f external-service.yaml
service/external-service created
[d:\K8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service
ok

After testing, the response of the external service can be returned.

4. Create an external service alias

In addition to manually configuring the service's Endpoints to expose an external service, you can also give the external service an alias; for example, suppose the domain name api.ksfzhaohui.com points to 10.13.82.21:

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: api.ksfzhaohui.com
  ports:
  - port: 80

To create a service that acts as an alias for an external service, set the type field of the service resource to ExternalName and specify the domain name of the external service in externalName:

[d:\K8s]$ kubectl create -f external-service-externalname.yaml
service/external-service created
[d:\K8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service:8080
ok

After testing, the response of the external service can be returned.

Expose services to external clients

To expose services to the outside world, Kubernetes provides three mechanisms: NodePort services, LoadBalancer services, and Ingress resources; each is described and demonstrated below.

1. NodePort services

Create a service and set its type to NodePort. A NodePort service makes Kubernetes reserve a port on all of its nodes (the same port number on every node) and forward incoming connections to the pods:

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia

Specify the service type as NodePort and the node port as 30123:

[d:\K8s]$ kubectl create -f kubia-svc-nodeport.yaml
service/kubia-nodeport created
[d:\K8s]$ kubectl get svc
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP        31d
kubia-nodeport   NodePort    10.96.59.16   <none>        80:30123/TCP   3s
[d:\K8s]$ kubectl exec kubia-7fs6m -- curl -s http://10.96.59.16
You've hit kubia-m487j

To access the pods from outside the cluster, you need the node's IP. The node used here is minikube; since minikube is installed on the local Windows system, it can be reached directly via minikube's internal IP:

[d:\K8s]$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE              KERNEL-VERSION   CONTAINER-RUNTIME
minikube   Ready    master   34d   v1.17.0   192.168.99.108   <none>        Buildroot 2019.02.7   4.19.81          docker://19.3.5

2. LoadBalancer services

Compared with NodePort, which reaches the internal pods through port 30123 of any node, a LoadBalancer service has its own unique, publicly accessible IP address; LoadBalancer is actually an extension of NodePort that makes the service reachable through a dedicated load balancer.

apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

Specify the service type as LoadBalancer; there is no need to specify a node port:

[d:\K8s]$ kubectl create -f kubia-svc-loadbalancer.yaml
service/kubia-loadbalancer created
[d:\K8s]$ kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP      10.96.0.1       <none>        443/TCP        31d
kubia-loadbalancer   LoadBalancer   10.96.207.113   <pending>     80:30038/TCP   7s
kubia-nodeport       NodePort       10.96.59.16     <none>        80:30123/TCP   32m

Although we did not specify a node port, node port 30038 was opened automatically after creation.

So the service can still be accessed NodePort-style (node IP + node port); it can also be accessed through EXTERNAL-IP, but with Minikube there is no external IP address, and the field remains in the pending state.

3. Understand and prevent unnecessary network hops

When an external client connects to a service through a node port, the randomly selected pod does not necessarily run on the node that received the connection; this extra hop can be avoided by configuring the service to redirect external traffic only to pods running on the node that received the connection:

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport-onlylocal
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30124
  selector:
    app: kubia

This is done by setting the externalTrafficPolicy field in the service's spec section.

4. Ingress

Each LoadBalancer service needs its own load balancer and a unique public IP address, whereas Ingress needs only one public IP to provide access to many services; when a client sends an HTTP request to the Ingress, the Ingress forwards it to the corresponding service based on the request's hostname and path.

4.1 Ingress Controller

Ingress resources only work if an Ingress controller is running in the cluster; different Kubernetes environments use different controllers, and some provide no default controller at all. The Minikube used here requires enabling an add-on to get the controller:

[d:\Program Files\Kubernetes\Minikube]$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- helm-tiller: disabled
- ingress: disabled
- ingress-dns: disabled
- logviewer: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled

Listing all the add-ons shows that ingress is disabled, so it needs to be enabled:

[d:\Program Files\Kubernetes\Minikube]$ minikube addons enable ingress
* ingress was successfully enabled

After enabling it, check the pods in the kube-system namespace:

[d:\K8s]$ kubectl get pods -n kube-system
NAME                                        READY   STATUS              RESTARTS   AGE
coredns-7f9c544f75-h3cwn                    1/1     Running             0          55d
coredns-7f9c544f75-x2ttk                    1/1     Running             0          55d
etcd-minikube                               1/1     Running             0          55d
kube-addon-manager-minikube                 1/1     Running             0          55d
kube-apiserver-minikube                     1/1     Running             0          55d
kube-controller-manager-minikube            1/1     Running             2          55d
kube-proxy-xtbc4                            1/1     Running             0          55d
kube-scheduler-minikube                     1/1     Running             2          55d
nginx-ingress-controller-6fc5bcc8c9-nvcb5   0/1     ContainerCreating   0          8s
storage-provisioner                         1/1     Running             0          55d

You can see a pod named nginx-ingress-controller being created; it stays stuck pulling its image, with the following error:

Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1": rpc error: code = Unknown desc = context canceled

This is because the image on quay.io cannot be downloaded from within China, so an Aliyun mirror can be used instead:

image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1

Modify the image in the deploy/static/mandatory.yaml file under ingress-nginx to the Aliyun mirror, then recreate:

[d:\K8s]$ kubectl create -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

Check the pods in the kube-system namespace again:

[d:\K8s]$ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-7f9c544f75-h3cwn                    1/1     Running   0          56d
coredns-7f9c544f75-x2ttk                    1/1     Running   0          56d
etcd-minikube                               1/1     Running   0          56d
kube-addon-manager-minikube                 1/1     Running   0          56d
kube-apiserver-minikube                     1/1     Running   0          56d
kube-controller-manager-minikube            1/1     Running   2          56d
kube-proxy-xtbc4                            1/1     Running   0          56d
kube-scheduler-minikube                     1/1     Running   2          56d
nginx-ingress-controller-6fc5bcc8c9-nvcb5   1/1     Running   0          10m
storage-provisioner                         1/1     Running   0          56d

Now that nginx-ingress-controller is in Running state, you can use Ingress resources.

4.2 Ingress Resources

After the Ingress controller starts, you can create Ingress resources

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

The resource type is Ingress, with a single rule: all requests sent to kubia.example.com are forwarded to the kubia-nodeport service on port 80.

[d:\K8s]$ kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        53d
kubia-nodeport   NodePort    10.96.204.104   <none>        80:30123/TCP   21h
[d:\K8s]$ kubectl create -f kubia-ingress.yaml
ingress.extensions/kubia created
[d:\K8s]$ kubectl get ingress
NAME    HOSTS               ADDRESS          PORTS   AGE
kubia   kubia.example.com   192.168.99.108   80      6m4s

You need to map the domain name to the ADDRESS 192.168.99.108 by modifying the hosts file. After that, the service can be accessed directly by domain name, and the request is ultimately forwarded to the kubia-nodeport service.
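For reference, the hosts-file entry would look like the line below (on Windows the file is typically C:\Windows\System32\drivers\etc\hosts, on Linux /etc/hosts):

```
192.168.99.108  kubia.example.com
```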

The request flow is as follows: the browser first queries the DNS server for the domain name, and DNS returns the controller's IP address; the client sends the request to the controller with kubia.example.com in the Host header; the controller determines from that header which service the client wants to reach; it then looks up the pod IPs through the Endpoints object associated with the service and forwards the request to one of them.

4.3 Ingress exposes multiple services

rules and paths are arrays, so multiple entries can be configured:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia2
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
      - path: /v2
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
  - host: kubia2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

Multiple hosts and paths are configured here; for convenience they all map to the same service.

[d:\K8s]$ kubectl create -f kubia-ingress2.yaml
ingress.extensions/kubia2 created
[d:\K8s]$ kubectl get ingress
NAME     HOSTS                                  ADDRESS          PORTS   AGE
kubia    kubia.example.com                      192.168.99.108   80      41m
kubia2   kubia.example.com,kubia2.example.com   192.168.99.108   80      15m

You also need to add these hosts to the hosts file before testing.

4.4 Configure Ingress to handle TLS traffic

The traffic described so far has all been plain HTTP. HTTPS requires configuring certificates: when a client opens a TLS connection to the Ingress controller, the controller terminates the TLS connection, so traffic between the client and the controller is encrypted while traffic between the controller and the pods is not. For the controller to do this, the certificate and private key must be attached to the Ingress.

[root@localhost batck-job]# openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
.+++
....+++
e is 65537 (0x10001)
[root@localhost batck-job]# openssl req -new -x509 -key tls.key -out tls.cert -days 360 -subj /CN=kubia.example.com
[root@localhost batck-job]# ll
-rw-r--r--. 1 root root 1115 Feb 11 01:20 tls.cert
-rw-r--r--. 1 root root 1679 Feb 11 01:20 tls.key

Create a secret from the two generated files:

[d:\K8s]$ kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key
secret/tls-secret created

You can now update the Ingress object so that it also accepts HTTPS requests for kubia.example.com:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  tls:
  - hosts:
    - kubia.example.com
    secretName: tls-secret
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

The certificate is referenced in the tls section:

[d:\K8s]$ kubectl apply -f kubia-ingress-tls.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
ingress.extensions/kubia configured

The service can now be accessed over HTTPS through a browser.

Pod readiness signal

A pod becomes a backend of a service as long as its labels match the service's pod selector, but a pod that is not ready cannot handle requests, so a readiness probe is needed to check whether the pod is ready. Only when the check succeeds does the pod serve requests as a backend of the service.

1. Readiness probe types

There are three types of readiness probes:

Exec probe: executes a process inside the container and determines the container's status from the process's exit code

HTTP GET probe: sends an HTTP GET request to the container and determines readiness from the HTTP status code of the response

TCP socket probe: opens a TCP connection to a specified port of the container; if the connection is established, the container is considered ready

Kubernetes calls the probe periodically and acts on the result: if a pod reports that it is not ready, it is removed from the service; when the pod becomes ready again, it is added back.
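For comparison with the exec probe used below, here is a minimal sketch of an HTTP GET readiness probe; the path, port, and timing values are illustrative, assuming the Node.js app from this article listening on 8080:

```yaml
readinessProbe:
  httpGet:
    path: /              # illustrative: any endpoint that returns 2xx/3xx when ready
    port: 8080           # the port the Node.js app listens on in this article
  initialDelaySeconds: 5 # wait before the first probe
  periodSeconds: 10      # probe interval
```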

2. Add a readiness probe to the pod

Edit the ReplicationController to modify the pod template and add a readiness probe:

[d:\K8s]$ kubectl edit rc kubia
replicationcontroller/kubia edited
[d:\K8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-7fs6m   1/1     Running   0          22d
kubia-m487j   1/1     Running   0          22d
kubia-q6z5w   1/1     Running   0          22d

Edit the ReplicationController as shown below to add the readinessProbe

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia
        ports:
        - containerPort: 8080
        readinessProbe:
          exec:
            command:
            - ls
            - /var/ready

The readiness probe periodically executes ls /var/ready inside the container. If the file exists, ls exits with code 0 and the probe succeeds; otherwise it returns a non-zero exit code and the probe fails.
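The exit-code contract the exec probe relies on can be demonstrated outside the cluster; the snippet below uses a local temp directory as a stand-in for the container filesystem:

```shell
# /tmp/probe-demo stands in for the container filesystem.
dir=/tmp/probe-demo
mkdir -p "$dir"
rm -f "$dir/ready"

# File absent: ls exits non-zero, so the probe would report "not ready".
ls "$dir/ready" >/dev/null 2>&1; not_ready=$?

# Create the file, as one would with `touch /var/ready` inside the container.
touch "$dir/ready"

# File present: ls exits 0, so the probe would report "ready".
ls "$dir/ready" >/dev/null 2>&1; ready=$?

echo "before: exit=$not_ready  after: exit=$ready"
```

Inside a real pod, creating or deleting /var/ready (e.g. via kubectl exec) flips the probe result the same way.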

Editing the ReplicationController does not replace the existing pods, so the READY column above still shows 1/1 for all of them, meaning they are serving requests.

[d:\K8s]$ kubectl delete pod kubia-m487j
pod "kubia-m487j" deleted
[d:\K8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-7fs6m   1/1     Running   0          22d
kubia-cxz5v   0/1     Running   0          50s
kubia-q6z5w   1/1     Running   0          22d

After deleting a pod, a replacement with the readiness probe is created immediately; its READY column stays 0/1 because /var/ready does not exist in the container.

This article first introduced the basics of services, how to create them, and service discovery; then the link between services and pods, the Endpoints resource; and finally three ways to expose services to external clients.

After reading the above, have you mastered what Kubernetes services are and how to create them? If you want to learn more, keep following for further articles; thank you for reading!
