2025-03-29 Update From: SLTechnology News&Howtos
This article explains how to deploy Nginx Ingress. The method described here is simple, fast, and practical.
Overview
In the open source community there are many implementations of the Kubernetes Ingress Controller, and Nginx Ingress is only one of them. It is, however, the most widely used implementation in the community: it is both feature-rich and high-performance.
What is Nginx Ingress?
Nginx Ingress is built around the Kubernetes Ingress object: nginx-ingress-controller translates the Ingress resources declared by the user into Nginx forwarding rules. Its core job is traffic forwarding and north-south load balancing (routing traffic entering the cluster). The controller watches the API server through Kubernetes Informers: whenever Ingress, Service, Endpoint, Secret, ConfigMap, and related objects change, it regenerates the configuration of the Nginx instances and forwards traffic accordingly.
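As a minimal illustration of what the controller consumes, here is a sketch of an Ingress resource that nginx-ingress-controller would translate into Nginx forwarding rules. The hostname and backend Service name are hypothetical placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # route this Ingress to the Nginx controller
spec:
  rules:
  - host: demo.example.com            # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service        # hypothetical backend Service
            port:
              number: 80
```

The controller watches objects like this one and renders the equivalent server/location blocks into the Nginx configuration.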
At present, there are two main implementations of Nginx Ingress in the community:
The implementation of the Kubernetes open source community (kubernetes/ingress-nginx)
The official implementation from NGINX (nginxinc/kubernetes-ingress)
Why do you need Nginx Ingress
In the open source community there are many Ingress Controller implementations, each with its own applicable scenarios, advantages, and disadvantages. Why is nginx-ingress-controller recommended? Let us first look at the problems a business runs into without it.
Taking the ingress controller recommended by default on the Tencent Cloud container service (hereinafter TKE) console as an example, there are problems such as the following:
The CLB-type Ingress cannot meet existing business needs: for example, multiple Ingresses cannot share the same public network entry point, and a default forwarding backend is not supported.
The business already uses nginx-ingress, and the operations staff are used to configuring nginx.conf and do not want to make too many changes.
The above problems can be well solved by using nginx-ingress-controller.
Prerequisites: deploying and installing the nginx-ingress-operator component
Go to the Tencent Cloud container service (TKE) console, select the cluster where Nginx Ingress needs to be deployed, enter Cluster > Component Management, and deploy and install the Nginx Ingress component.
Verify that the component is installed and running normally.
Deployment plan
TKE provides a variety of deployment solutions for nginx-ingress-controller and ways to access LB to meet the needs of different business scenarios. Different solutions are described below.
nginx-ingress-controller deployment options
Option 1: DaemonSet + node pool
As the key traffic entry point, Nginx is a crucial component, so it is not recommended to deploy it on the same nodes as other services. You can isolate it by setting a taint on a dedicated node pool. For more information on node pools, see the Tencent Cloud TKE node pool overview.
When using this deployment scenario, you should be aware of the following:
Prepare the node pool where nginx-ingress-controller will be deployed in advance, and set a Taint and Label on the node pool to prevent other Pods from being scheduled onto it.
Ensure that the nginx-ingress-operator component has been successfully deployed and installed; refer to the guidelines above.
Enter the component details page and create an nginx-ingress-controller instance (multiple instances can exist in a single cluster).
For the deployment method, select DaemonSet on the specified node pool.
Set the toleration for the taint.
Set Request/Limit. Request must be smaller than the machine specification of the node pool (nodes reserve some resources, and an overly large Request can leave instances unschedulable due to insufficient resources). Limit is optional.
Other parameters can be set according to business needs.
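The node-pool targeting described in the steps above boils down to a pod spec like the following sketch. The label key, taint key, image, and resource values are hypothetical; the console generates the real manifest:

```yaml
# Fragment of the DaemonSet pod spec for nginx-ingress-controller (illustrative)
spec:
  nodeSelector:
    node.kubernetes.io/pool: nginx-ingress      # hypothetical node-pool label
  tolerations:
  - key: "dedicated"                            # hypothetical taint key set on the node pool
    operator: "Equal"
    value: "nginx-ingress"
    effect: "NoSchedule"
  containers:
  - name: controller
    image: nginx-ingress-controller:latest      # placeholder image
    resources:
      requests:                                 # keep below node size minus system reservation
        cpu: "500m"
        memory: "512Mi"
      # limits are optional, as noted above
```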
Option 2: Deployment + HPA
With the Deployment + HPA solution, you can configure taints and tolerations according to your business needs and deploy the Nginx Pods separately from business Pods. Combined with HPA, you can set CPU, memory, and other metrics for auto scaling.
When using this deployment scenario, you should be aware of the following:
Label the nodes in the cluster on which nginx-ingress-controller will be deployed.
Ensure that the nginx-ingress-operator component has been successfully deployed and installed; refer to the guidelines above.
Enter the component details page and create an nginx-ingress-controller instance (multiple instances can exist in a single cluster).
For the deployment method, select custom Deployment + HPA.
Set HPA trigger policy
Set up Request/Limit
Set the node scheduling policy. It is recommended to give nginx-ingress-controller dedicated nodes so that resource contention from other workloads cannot make it unavailable.
Other parameters can be set according to business needs.
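The HPA trigger policy mentioned in the steps above corresponds to a HorizontalPodAutoscaler like this sketch. The target name, replica counts, and CPU threshold are illustrative assumptions, not values from the source:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller    # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller  # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70      # scale out when average CPU exceeds 70%
```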
How to deploy the front-end LB for Nginx
The sections above cover deploying nginx-ingress-operator and nginx-ingress-controller in a TKE cluster, along with deployment recommendations. Completing those steps only deploys the Nginx components inside the cluster; to receive external traffic, you also need to configure the front-end LB for Nginx. TKE has productized support for Nginx Ingress, and you can choose one of the following deployment modes according to your business needs.
Option 1: In a VPC-CNI mode cluster, expose Nginx through a CLB direct-to-Pod Service (recommended)
Prerequisites (meeting either one is sufficient):
The network plug-in of the cluster itself is VPC-CNI
The network plug-in of the cluster itself is Global Router, and VPC-CNI support has been enabled (mixed use of the two modes)
This solution has excellent performance: all Pods use elastic NICs, and elastic-NIC Pods support binding the CLB directly to the Pods, bypassing NodePort. The CLB does not need to be maintained manually, and automatic scaling is supported, making this the most ideal solution.
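In this mode the controller is exposed through a LoadBalancer Service that binds the CLB directly to the Pods. A sketch follows; the `service.cloud.tencent.com/direct-access` annotation and the Pod label are assumptions and should be verified against the current TKE documentation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    service.cloud.tencent.com/direct-access: "true"  # CLB binds Pods directly (verify in TKE docs)
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller    # hypothetical Pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```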
Option 2: Global Router mode cluster uses an ordinary LoadBalancer Service
Currently, TKE's default implementation of LoadBalancer-type Services is based on NodePort: the CLB binds each node's NodePort as a backend RS and forwards traffic to the nodes' NodePorts, and each node then routes requests through iptables or IPVS to the Service's backend Pods (here, the Pods of the Nginx Ingress controller).
If your cluster does not support the VPC-CNI network mode, you can receive traffic through an ordinary LoadBalancer Service. This is the easiest way to deploy Nginx Ingress on TKE, but because traffic is forwarded through an extra NodePort hop, the following problems may occur:
The forwarding path is long: after traffic reaches a NodePort, it goes through another layer of Kubernetes-internal load balancing (iptables or IPVS) before reaching Nginx, which adds latency.
NodePort forwarding necessarily performs SNAT. If traffic is too concentrated, source ports can be exhausted or conntrack insertions can conflict, causing packet loss and traffic anomalies.
The NodePort on each node also acts as a load balancer. If the CLB binds the NodePorts of a large number of nodes, load-balancing state is scattered across the nodes, which easily leads to global load imbalance.
The CLB health-checks each NodePort, and the probe packets are ultimately forwarded to the Nginx Ingress Pods. If the CLB binds far more nodes than there are Nginx Ingress Pods, the probe packets put significant pressure on Nginx Ingress.
Option 3: use HostNetwork + LB
Although Option 2 is the easiest to deploy, traffic passes through a NodePort layer and may hit the problems described above. Instead, you can run Nginx Ingress with hostNetwork and have the CLB bind the node IP + port (80, 443) directly. Because hostNetwork Pods that listen on the same ports cannot be scheduled onto the same node, port conflicts are avoided. Since TKE has not yet productized this solution, you need to plan ahead: select some nodes dedicated to nginx-ingress-controller, Label them, and deploy in DaemonSet form (that is, deployment Option 1 above).
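The hostNetwork variant can be sketched as a DaemonSet pod-spec fragment like the following. The node label and image are hypothetical placeholders:

```yaml
# Fragment of the DaemonSet pod spec for the HostNetwork option (illustrative)
spec:
  hostNetwork: true                  # bind 80/443 directly on the node's IP
  dnsPolicy: ClusterFirstWithHostNet # keep cluster DNS resolution under hostNetwork
  nodeSelector:
    nginx-ingress: "true"            # hypothetical label applied to the chosen nodes
  containers:
  - name: controller
    image: nginx-ingress-controller:latest   # placeholder image
    ports:
    - containerPort: 80
      hostPort: 80
    - containerPort: 443
      hostPort: 443
```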
How to integrate Monitoring
TKE integrates the Tencent Cloud container team's high-performance cloud-native monitoring service (portal: https://console.cloud.tencent.com/tke2/prometheus). You can also learn about Prometheus, Kvass, and Kvass-based Prometheus clustering in the previously published article "How to use Prometheus to monitor a Kubernetes cluster of 100,000 containers".
Bind monitoring instance
View monitoring data
How to collect and consume logs
By integrating the Tencent Cloud Log Service (CLS), TKE provides a complete, productized way to collect and consume nginx-ingress-controller logs. Note the following:
Prerequisite: ensure that log collection is enabled in the current cluster.
In the nginx-ingress-controller instance, configure options related to log collection.
At this point, you should have a deeper understanding of how to deploy Nginx Ingress. Try it out in practice.