
Application practice of k8s Service and Ingress


1. Background

To simplify cluster deployment, the company's K8s deployment solution uses kubeadm, and clusters are partitioned along the online and offline dimensions. There are three main reasons for choosing kubeadm: 1. deployment is convenient and fast; 2. expanding k8s nodes is easy; 3. the deployment difficulty is low.
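As a rough sketch of that kubeadm flow (generic commands, not the company's actual procedure; the version pin matches the component table below):

# on the master node
kubeadm init --kubernetes-version=v1.13.4
# make kubectl usable for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# print the join command to run on each additional node (this is what makes node expansion easy)
kubeadm token create --print-join-command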

2. System environment

System version: CentOS Linux release 7.5.1804
Kernel version: 3.10.0-862.14.4.el7.x86_64
etcd: etcdctl version 3.3.11

3. Related component version

kubeadm components: component name, component version, and remarks

k8s.gcr.io/kube-apiserver

v1.13.4

kube-apiserver is one of the most important core components of Kubernetes. It mainly provides the following functions (a quick kubectl illustration follows the list):

Provides the REST API for cluster management, including authentication, authorization, data validation, and cluster state changes.

Provides the hub for data exchange and communication between the other modules (other modules query or modify data through the API server, and only the API server directly operates on etcd).
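As a small illustration of the point that every module goes through the API server (these kubectl calls are generic examples, not from the original article):

# health endpoint served by kube-apiserver
kubectl get --raw /healthz
# the REST resources the API server exposes
kubectl api-resources
# the same Pod data other components read via the API server
kubectl get --raw /api/v1/namespaces/default/pods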

k8s.gcr.io/kube-scheduler

v1.13.4

kube-scheduler is responsible for assigning Pods to nodes in the cluster. It watches kube-apiserver for Pods that have not yet been assigned to a node, and then binds those Pods to nodes according to the scheduling policy (by updating the Pod's NodeName field).

The scheduler needs to take a number of factors into account (a small affinity example follows this list):

Fair scheduling

Efficient utilization of resources

QoS

Affinity and anti-affinity

Data locality

Inter-workload interference

Deadlines
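To make the affinity/anti-affinity item concrete, here is a minimal Pod spec sketch that constrains scheduling with node affinity (the disktype=ssd label is an assumed example, not from the article):

# hypothetical example: require nodes labeled disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx:alpine

The scheduler will only bind this Pod to a node that carries the disktype=ssd label, e.g. one labeled with kubectl label node <node-name> disktype=ssd.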

k8s.gcr.io/kube-controller-manager

v1.13.4

Controller Manager, which consists of kube-controller-manager and cloud-controller-manager, is the brain of Kubernetes. It monitors the state of the entire cluster through the apiserver and ensures that the cluster stays in the expected working state.

k8s.gcr.io/kube-proxy

v1.13.4

A kube-proxy service runs on every machine. It watches the API server for changes to Services and Endpoints and configures load balancing for Services through iptables and similar mechanisms (only TCP and UDP are supported). kube-proxy can run directly on the physical machine, or as a static Pod or a DaemonSet.

k8s.gcr.io/pause

3.1

Kubernetes attaches a pause container (gcr.io/google_containers/pause) to every Pod. This container only holds the Pod's network information; the business containers share the network by joining the pause container's network namespace. The pause container is created when the Pod is created and deleted when the Pod is deleted, and, as its name suggests, it simply pauses.

In other words, this container anchors the namespaces of the business Pod.

k8s.gcr.io/coredns

1.2.6

DNS is one of the core functions of Kubernetes, provided as a required cluster add-on through kube-dns or CoreDNS, which supplies naming services for the cluster.

Weaveworks/weave

2.5.2

Weave Net is a multi-host container network solution that supports a decentralized control plane. The wRouter on each host establishes a full mesh of TCP links and synchronizes control information via Gossip. This approach does away with a centralized K/V store and reduces deployment complexity to a certain extent.

4. Application practice

1. The cluster is running normally, as shown below:
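One way to confirm this, as a minimal sketch assuming the kubeadm cluster described above:

# all nodes should be Ready
kubectl get nodes -o wide
# control-plane components should report Healthy (works on v1.13)
kubectl get componentstatuses
# core add-ons (coredns, kube-proxy, weave-net) should be Running
kubectl -n kube-system get pods -o wide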

2. View the Deployment configuration

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dp
spec:
  selector:
    matchLabels:
      app: nginx-dp
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

3. View the Service configuration

apiVersion: v1
kind: Service
metadata:
  name: nginx-dp-cpf
spec:
  type: NodePort
  ports:
  - nodePort: 30001
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-dp
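A hedged sketch of applying and verifying the two manifests above (the file names nginx-dp.yaml and nginx-dp-cpf-svc.yaml are illustrative, not from the original deployment):

kubectl apply -f nginx-dp.yaml          # the Deployment above (illustrative file name)
kubectl apply -f nginx-dp-cpf-svc.yaml  # the Service above (illustrative file name)
kubectl get deployment nginx-dp
kubectl get pods -l app=nginx-dp -o wide
kubectl get svc nginx-dp-cpf            # should show TYPE NodePort and PORT 80:30001/TCP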

4. View the generated endpoints

5. View service rules

6. View the iptables firewall rules generated by kube-proxy (normal rules)
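Hedged command-line equivalents of checks 4-6, assuming the Deployment and Service defined above:

# step 4: the Endpoints object should list the Pod IP(s) behind the Service
kubectl get endpoints nginx-dp-cpf
# step 5: Service details, including selector, ClusterIP and NodePort
kubectl describe svc nginx-dp-cpf
# step 6: iptables rules that kube-proxy generated for this Service
iptables-save | grep nginx-dp-cpf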

Note:

1. If a firewall rule similar to the following appears:

-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "default/nginx-dp-cpf: has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 30001 -j REJECT --reject-with icmp-port-unreachable

2. Solution (consolidated in a sketch after this list):

(1). net.ipv4.ip_forward = 1 # enable the kernel's IP forwarding function

(2). iptables -P FORWARD ACCEPT # set the default policy of the iptables FORWARD chain to ACCEPT

(3). iptables -P OUTPUT ACCEPT # set the default policy of the iptables OUTPUT chain to ACCEPT

(4). Check whether the Deployment labels and the Service selector are associated correctly

(5). kubectl get endpoints --show-labels # check that endpoints actually exist; the REJECT rule above appears when a Service has no endpoints, so if the list is empty, re-check the label association
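Taken together, a hedged sketch of the fix (run on the affected node; persist the sysctl setting in /etc/sysctl.conf if it should survive reboots):

# enable kernel IP forwarding
sysctl -w net.ipv4.ip_forward=1
# allow traffic through the FORWARD and OUTPUT chains by default
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
# confirm the Service now has endpoints; if not, fix the label/selector association
kubectl get endpoints nginx-dp-cpf --show-labels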

7. Conduct functional testing (testing inside the k8s cluster)
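A hedged functional test along these lines (10.109.21.77 is the node IP used elsewhere in this article; the curl-test Pod name is illustrative):

# from inside the cluster, via the Service's DNS name (assumes the default namespace)
kubectl run curl-test --rm -it --restart=Never --image=alpine -- wget -qO- http://nginx-dp-cpf
# from outside the cluster, via the NodePort
curl http://10.109.21.77:30001/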

8. Ingress deployment

# execute on the master:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
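A hedged check that the controller came up (the manifests above create the ingress-nginx namespace and a NodePort Service for the controller):

# the controller Pod should reach Running
kubectl -n ingress-nginx get pods -o wide
# note the NodePort(s) assigned to the controller Service; they are used in the access tests below
kubectl -n ingress-nginx get svc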

Note:

The nodeSelector field in the controller manifest must match labels that actually exist on a node; otherwise the ingress-nginx Pod will stay in Pending status (see the label check sketched below).
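A hedged way to diagnose and fix that Pending state (<controller-deployment>, <node-name>, and the example label are placeholders, not values from the article):

# compare the labels the nodes actually carry...
kubectl get nodes --show-labels
# ...with the nodeSelector the controller Deployment requests
kubectl -n ingress-nginx get deployments
kubectl -n ingress-nginx get deployment <controller-deployment> -o jsonpath='{.spec.template.spec.nodeSelector}'
# add the missing label to a node (example key/value, placeholder node name)
kubectl label node <node-name> kubernetes.io/os=linux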

9. Ingress configuration

## configuration yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: test-nginx
          servicePort: 80

# View the creation of ingress
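A hedged equivalent of the check (on a bare-metal NodePort setup the ADDRESS column may stay empty):

kubectl get ingress test-ingress
kubectl describe ingress test-ingress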

# Test access #
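A hedged access test; <nodePort> stands for whichever port the ingress-nginx NodePort Service reported above:

# the rule above declares no host, so a plain request should reach test-nginx
curl http://10.109.21.77:<nodePort>/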

# Domain-name-based Ingress with multiple services

An Ingress routed to multiple Services forwards requests to different back-end Services depending on the request path, for example:

www.breaklinux.com -> 10.109.21.77
  /test1 -> s1:80
  /test2 -> s2:80

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: www.breaklinux.com
    http:
      paths:
      - path: /test1
        backend:
          serviceName: test-nginx
          servicePort: 80
      - path: /test2
        backend:
          serviceName: nginx-dp-test-01
          servicePort: 80
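A hedged test of the two paths, again through the ingress controller's NodePort (<nodePort> as above):

curl -H "Host: www.breaklinux.com" http://10.109.21.77:<nodePort>/test1
curl -H "Host: www.breaklinux.com" http://10.109.21.77:<nodePort>/test2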

# Virtual Host Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: ""
    nginx.ingress.kubernetes.io/proxy-body-size: 10240m
    nginx.ingress.kubernetes.io/proxy-read-timeout: "36000"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "36000"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: test-nginx-ingess
spec:
  rules:
  - host: test-nginx-ingress.dev.k8s.chj.cloud
    http:
      paths:
      - backend:
          serviceName: test-nginx
          servicePort: 80
        path: /?(.*)

## Access test based on the virtual host

## since there is no DNS service in this environment, bind the hostnames in /etc/hosts for testing

cat /etc/hosts
10.109.21.77 www.breaklinux.com test-nginx-ingress.dev.k8s.chj.cloud
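With that hosts entry in place, a hedged access test (<nodePort> again being the ingress-nginx NodePort):

curl http://test-nginx-ingress.dev.k8s.chj.cloud:<nodePort>/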
