** This article is based on OpenShift 3.11 and Kubernetes 1.11 for testing **
1. Why does OpenShift need Router and Route?
As the name implies, Router is the router and Route is a route configured in that router. These two OpenShift concepts exist to meet the need to access services from outside the cluster (that is, from somewhere other than the cluster nodes). I don't know why OpenShift renamed Kubernetes' Ingress to Router, but I think the name Ingress is more appropriate.
A simple schematic of the two access paths, from the outside through the router and from inside the cluster through the service, is as follows:
In the figure above, the application's three pods are located on node1, node2, and node3 respectively. There are three layers of IP addresses in OpenShift:
The pod's own IP address, comparable to a virtual machine's fixed IP in OpenStack. It is only meaningful within the cluster.
The service's IP address. A service usually has a ClusterIP, which is also an address that is only reachable within the cluster.
The application's external IP address, comparable to a floating IP in OpenStack or an IDC IP (there is a NAT mapping between the floating IP and the fixed IP).
Therefore, there are only two ways to access applications in pod from outside the cluster:
One is to use a proxy that maps the external IP address or domain name to the backend pod IP addresses. This is the approach taken by OpenShift router/route. The router in OpenShift is a cluster infrastructure service that runs on specific nodes (usually infrastructure nodes) and is created and managed by the cluster administrator; it can have multiple replicas (pods). A router can contain multiple routes, and each route uses the domain name of the incoming HTTP request to find its backend pod list and forward the traffic. In other words, the application in the pods is exposed under a public domain name so that users can access it by that name. This is essentially a layer-7 load balancer. OpenShift implements it with HAProxy by default, but other implementations, such as F5, are also supported; a minimal command-line sketch of this approach follows this list.
The other is to expose the service directly outside the cluster. This approach will be explained in detail in the article "Service Service".
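As a minimal sketch of the router/route approach (the service name myapp and the hostname myapp.example.com are made up for illustration), a service can be exposed through a route like this:

oc expose service myapp --hostname=myapp.example.com   # create a route that maps the domain name to the service's pods
oc get route myapp                                      # show the generated route, its host, and its backing service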
2. How does OpenShift use HAProxy to implement router and route?
2.1 Router deployment
When an OpenShift cluster is deployed with ansible in the default configuration, an HAProxy pod is run with host networking on the cluster's infra nodes, listening on ports 80 and 443 on all interfaces.
[root@infra-node3 cloud-user]# netstat -lntp | grep haproxy
tcp        0      0 127.0.0.1:10443         0.0.0.0:*               LISTEN      583/haproxy
tcp        0      0 127.0.0.1:10444         0.0.0.0:*               LISTEN      583/haproxy
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      583/haproxy
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      583/haproxy
Of these, ports 10443 and 10444 on 127.0.0.1 are used by HAProxy itself. They will be explained below.
Therefore, there can be only one HAProxy pod per infra node because these ports can only be occupied once. If the scheduler cannot find a node that meets the requirements, the scheduling of the router service will fail:
0/7 nodes are available: 2 node(s) didn't have free ports for the requested pod ports, 5 node(s) didn't match node selector.
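To check where the router pods ended up and whether scheduling succeeded, something like the following can be used (assuming the router runs in the default project, as in a stock installation; <router-pod> is a placeholder):

oc get pods -n default -o wide | grep router     # one router pod per infra node, each bound to host ports 80/443
oc describe pod <router-pod> -n default          # the Events section shows scheduling failures like the one above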
OpenShift HAProxy Router supports two deployment methods:
One is the common single Router service deployment, which has one or more instances (pod) distributed on multiple nodes and is responsible for external access to the services deployed on the entire cluster.
The other is sharded deployment. In this case there are multiple router services, each responsible for a specified set of projects, with the mapping between the two defined by labels. This is a solution to the problem of a single router's limited performance.
OpenShift provides the oc adm router command to create the router service.
Create a router:
[root@master1 cloud-user]# oc adm router router2 --replicas=1 --service-account=router
info: password for stats user admin has been set to J3YyPjlbqf
--> Creating router router2 ...
    warning: serviceaccounts "router" already exists
    clusterrolebinding.authorization.openshift.io "router-router2-role" created
    deploymentconfig.apps.openshift.io "router2" created
    service "router2" created
--> Success
See the official document https://docs.openshift.com/container-platform/3.11/install_config/router/default_haproxy_router.html for detailed deployment methods.
2.2 HAProxy process in Router pod
In each pod of the router service, the openshift-router process starts a haproxy process:
UID          PID    PPID  C STIME TTY          TIME CMD
1000000+       1       0  0 Nov21 ?        00:14:27 /usr/bin/openshift-router
1000000+   16011       1  0 12:42 ?        00:00:00 /usr/sbin/haproxy -f /var/lib/haproxy/conf/haproxy.config -p /var/lib/haproxy/run/haproxy.pid -x /var/lib/haproxy/run/haproxy.sock -sf 16004
View the configuration file used by haproxy (only in part):
global
  ...
  ca-base /etc/ssl
  crt-base /etc/ssl
  ...
defaults
  ...
frontend public
  bind :80
  ...
  http-request del-header Proxy
  # DNS labels are case insensitive (RFC 4343), we need to convert the hostname into lowercase before matching
  http-request set-header Host %[req.hdr(Host),lower]
  # check if we need to redirect/force using https
  acl secure_redirect base,map_reg(/var/lib/haproxy/conf/os_route_http_redirect.map) -m found
  redirect scheme https if secure_redirect
  ...
frontend public_ssl
  bind :443
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
  # if the connection is SNI and the route is a passthrough don't use the termination backend, just use the tcp backend
  # for the SNI case, we also need to compare it in case-insensitive mode (by converting it to lowercase) as RFC 4343 says
  acl sni req.ssl_sni -m found
  acl sni_passthrough req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_sni_passthrough.map) -m found
  use_backend %[req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_tcp_be.map)] if sni sni_passthrough
  ...
frontend fe_sni
  ...
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]
  ...
backend be_edge_http:...
  ...
  server ... cookie 8669a19afc9f0fed6824feb9fb1cf4ac weight 256 check inter 5000ms
For simplicity, only part of the configuration file is shown above. It consists of three main parts:
Global configuration, such as the maximum number of connections (maxconn), timeouts (timeout), and so on.
Frontend configuration: by default, HAProxy listens for external https and http requests on ports 443 and 80.
Backend configuration for each exposed service, which contains the key settings: the backend protocol (mode), the load balancing method (balance), the backend list (server entries, in this case pods, with their IP addresses and ports), certificates, and so on.
Therefore, the router function of OpenShift needs to be able to manage and control these three parts.
For a detailed introduction to load balancers and HAProxy, please refer to the article "Understanding Neutron (7): How Neutron Virtualizes Load Balancers".
2.3 Global configuration Management
To specify or modify the global configuration of HAProxy, OpenShift provides two ways:
(1) The first is to specify parameters with the oc adm router command when creating the router; for example, --max-connections sets the maximum number of connections:
oc adm router --max-connections=200000 --ports='81:80,444:443' router3
The maxconn of the resulting HAProxy will be 200000. The ports exposed by the router3 service are 81 and 444, but the ports inside the HAProxy pod are still 80 and 443.
(2) The second is to change the global configuration by setting environment variables on the router's deployment config (dc/router).
There is a complete list of environment variables in the official documentation https://docs.openshift.com/container-platform/3.4/architecture/core_concepts/routes.html#haproxy-template-router. For example, after running the following command
oc set env dc/router3 ROUTER_SERVICE_HTTPS_PORT=444 ROUTER_SERVICE_HTTP_PORT=81 STATS_PORT=1937
router3 will be redeployed. The newly deployed HAProxy listens for https on port 444 and for http on port 81, and the statistics port is 1937.
2.4 Routes of the passthrough type and their HAProxy backends
(1) Create a route through the OpenShift console or the oc command that exposes the jenkins service of the sit project under the domain name sitjenkins.com.cn.
Create a route on the interface:
Results:
Name:             sitjenkins.com.cn
Namespace:        sit
Labels:           app=jenkins-ephemeral
                  template=jenkins-ephemeral-template
Annotations:      ...
Requested Host:   sitjenkins.com.cn
Path:
TLS Termination:  passthrough
Endpoint Port:    jenkins
Weight:           100 (100%)
Endpoints:        10.128.2.15:8080, 10.131.0.10:8080
Here, the service name acts as an intermediary, connecting the route to the service's endpoints (that is, the pods).
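For reference, the same route could also be created from the command line; this is a sketch that reuses the project, service, and hostname from the example above (the route name is chosen here for illustration):

oc create route passthrough sitjenkins --service=jenkins --hostname=sitjenkins.com.cn -n sit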
(2) An extra backend appears in the configuration file of the HAProxy process in each of the router service's two pods:
# Secure backend, pass through
backend be_tcp:sit:sitjenkins.com.cn
  balance source
  hash-type consistent
  timeout check 5000ms
  server pod:jenkins-1-bqhfj:jenkins:10.128.2.15:8080 10.128.2.15:8080 weight 256 check inter 5000ms
  server pod:jenkins-1-h3fff:jenkins:10.131.0.10:8080 10.131.0.10:8080 weight 256 check inter 5000ms
The server entries in this backend are actually pods, which OpenShift found via the service name from step (1). balance specifies the load balancing strategy, which is explained later.
(3) One more record is added to the file /var/lib/haproxy/conf/os_sni_passthrough.map:
sh-4.2$ cat /var/lib/haproxy/conf/os_sni_passthrough.map
^sitjenkins\.com\.cn(:[0-9]+)?(/.*)?$ 1
(4) One more record is added to the file /var/lib/haproxy/conf/os_tcp_be.map:
sh-4.2$ cat /var/lib/haproxy/conf/os_tcp_be.map
^sitjenkins\.com\.cn(:[0-9]+)?(/.*)?$ be_tcp:sit:sitjenkins.com.cn
(5) The logic by which HAProxy selects the backend added in step (2) for this route, based on the map files above, is as follows:
frontend public_ssl                           # frontend for https, listening on port 443
  bind :443
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # if the connection is SNI and the route is a passthrough don't use the termination backend, just use the tcp backend
  # for the SNI case, we also need to compare it in case-insensitive mode (by converting it to lowercase) as RFC 4343 says
  acl sni req.ssl_sni -m found                # check whether the https request carries SNI
  acl sni_passthrough req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_sni_passthrough.map) -m found    # check the hostname sent via SNI against os_sni_passthrough.map
  use_backend %[req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_tcp_be.map)] if sni sni_passthrough    # get the backend name from os_tcp_be.map based on the SNI hostname

  # if the route is SNI and NOT passthrough enter the termination flow
  use_backend be_sni if sni

  # non SNI requests should enter a default termination backend rather than the custom cert SNI backend since it
  # will not be able to match a cert to an SNI host
  default_backend be_no_sni
(6) The HAProxy process is then reloaded to apply the modified configuration file.
Some background knowledge needed to understand the configuration in step (5):
SNI: TLS Server Name Indication (SNI) is an extension of the TLS protocol by which the client tells the server, at the start of the TLS handshake, the hostname it wants to connect to, so that the server can return the certificate matching that hostname to the client. This allows one server to present the different certificates required by multiple hostnames. See https://en.wikipedia.org/wiki/Server_Name_Indication for details.
OpenShift passthrough route: for this type of route, TLS is not terminated on the router; instead, the router passes the TLS connection through to the backend. This is explained further below.
HAProxy support for SNI: HAProxy can select a specific backend based on the hostname carried in the SNI extension. See https://www.haproxy.com/blog/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension/ for details.
HAProxy ACL: for more information, please see https://www.haproxy.com/documentation/aloha/10-0/traffic-management/lb-layer7/acls/
From the comments above we can see that, for an https request, the HAProxy process takes the domain name sitjenkins.com.cn carried in SNI, looks it up in os_tcp_be.map, and obtains the backend name be_tcp:sit:sitjenkins.com.cn, which is exactly the backend shown in step (2).
In other words, the HAProxy used by OpenShift's router performs domain-name-based routing, as the map files above illustrate. For more information, please see the official documentation.
2.5 Routes of the edge and re-encrypt types and their HAProxy configuration
HAProxy frontend: the frontend still listens for external HTTPS requests on port 443. However, when the TLS termination type is not passthrough (that is, edge or re-encrypt), the hostname is not in the passthrough map, so the sni branch selects the backend be_sni instead:
backend be_sni
  server fe_sni 127.0.0.1:10444 weight 1 send-proxy
The only server in this backend is the local address 127.0.0.1:10444, so the traffic is handed back to the frontend fe_sni, which listens on that port:
frontend fe_sni
  # terminate ssl on edge
  bind 127.0.0.1:10444 ssl no-sslv3 crt /var/lib/haproxy/router/certs/default.pem crt-list /var/lib/haproxy/conf/cert_config.map accept-proxy
  mode http
  ...
  # map to backend
  # Search from most specific to general path (host case).
  # Note: If no match, haproxy uses the default_backend, no other
  #       use_backend directives below this will be processed.
  use_backend %[base,map_reg(/var/lib/haproxy/conf/os_edge_reencrypt_be.map)]
  default_backend openshift_default
The corresponding map file:
sh-4.2$ cat /var/lib/haproxy/conf/os_edge_reencrypt_be.map
^edgejenkins\.com\.cn(:[0-9]+)?(/.*)?$ be_edge_http:sit:jenkins-edge
HAProxy backend of Edge type route:
backend be_edge_http:sit:jenkins-edge
  mode http
  option redispatch
  option forwardfor
  balance leastconn
  timeout check 5000ms
  ...
  server pod:jenkins-1-bqhfj:jenkins:10.128.2.15:8080 10.128.2.15:8080 cookie 71c6bd03732fa7da2f1b497b1e4c7993 weight 256 check inter 5000ms
  server pod:jenkins-1-h3fff:jenkins:10.131.0.10:8080 10.131.0.10:8080 cookie fa8d7fb72a46958a7add1406e6d26cc8 weight 256 check inter 5000ms
HAProxy backend of Re-encrypt type route:
  ...
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  ...
  # the connection to the backend is encrypted with ssl again, and the backend hostname is verified
  server pod:jenkins-1-bqhfj:jenkins:10.128.2.15:8080 10.128.2.15:8080 cookie ... weight 256 ssl verifyhost jenkins.sit.svc verify required ca-file /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt check inter 5000ms
  server pod:jenkins-1-h3fff:jenkins:10.131.0.10:8080 10.131.0.10:8080 cookie ... weight 256 ssl verifyhost jenkins.sit.svc verify required ca-file /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt check inter 5000ms
Here you can see that the connection toward the backend is encrypted again. Note that mode is still http rather than some kind of "https" mode: in HAProxy, mode only selects layer-4 (tcp) versus layer-7 (http) processing, and the TLS encryption toward the backend is enabled separately by the ssl keyword on the server lines.
2.6 Setting and modifying route configuration
There are several important aspects of route configuration:
(1) The TLS termination mode. There are three types:
Edge: TLS is terminated on the router, and unencrypted traffic is forwarded to the backend pods. The TLS certificate therefore needs to be configured on the route; if it is not, the router's default certificate is used.
Passthrough: encrypted traffic is sent straight through to the pods; the router does not terminate TLS, so no certificate or key needs to be configured on the router.
Re-encrypt: a variant of edge. The router first terminates TLS with one certificate, then re-encrypts the traffic with another certificate before sending it to the backend pods, so the entire network path is encrypted.
Set up:
You can set it when you create a route, or you can modify its SSL termination by modifying the termination configuration item of the route.
Please refer to the official document https://docs.okd.io/latest/architecture/networking/routes.html#edge-termination for details.
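As a sketch of how the termination type is chosen when a route is created (the route names, service name, hostnames, and certificate file names below are placeholders):

oc create route edge myapp-edge --service=myapp --hostname=edge.example.com --cert=tls.crt --key=tls.key
oc create route reencrypt myapp-re --service=myapp --hostname=re.example.com --dest-ca-cert=service-ca.crt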
(2) The load balancing strategy. There are also three options:
Roundrobin: take turns using all backends based on weight.
Leastconn: select the least connected backend to receive the request.
Source: hash the source IP to ensure that requests from the same source IP are sent to the same backend.
Set up:
To modify the load balancing policy of the whole router, use the ROUTER_TCP_BALANCE_SCHEME environment variable to set the policy for all passthrough routes of that router, and ROUTER_LOAD_BALANCE_ALGORITHM to set the policy for the other route types.
You can use the haproxy.router.openshift.io/balance annotation to set the load balancing policy of an individual route.
For example:
Set the environment variable for the whole router: oc set env dc/router ROUTER_TCP_BALANCE_SCHEME=roundrobin. After the change, the router instances are redeployed and all passthrough routes use roundrobin. The default is source.
Modify the load balancing policy of a single route: oc edit route aaaa.svc.cluster.local and set the annotation. After the change, the balance value in the corresponding HAProxy backend changes to leastconn.
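For example, the per-route override could also be applied with a single command (the route name myroute is a placeholder):

oc annotate route myroute haproxy.router.openshift.io/balance=leastconn --overwrite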
2.7 One route distributing traffic to multiple backend services
This feature is often used in development and testing workflows, for example for A/B testing.
In the following configuration, three versions of an application are deployed, with one route in front, and each service is given a different weight.
In the resulting HAProxy configuration file, the route's backend lists the pods of all these services with the corresponding weights, and the roundrobin load balancing mode is used.
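A sketch of how such a weighted route might be set up from the command line (the route name, the service names, and the weights are made up for illustration):

oc set route-backends myroute app-v1=50 app-v2=30 app-v3=20   # split traffic 50/30/20 across the three services
oc set route-backends myroute                                  # with no arguments, print the current backends and weights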
3. How can OpenShift router services achieve high availability?
The OpenShift router service supports two high-availability modes.
3.1 A single router service with multiple replicas, fronted by DNS or a load balancer
In this mode only one router service is deployed, and it handles all externally exposed services of the cluster. To achieve HA, set the number of replicas to more than 1 so that pods are created on more than one node, and then put round-robin DNS or a layer-4 load balancer in front of them.
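For example, scaling the default router across several infra nodes could look like this (dc/router is the default deployment config name created by the installer; the replica count is illustrative):

oc scale dc/router --replicas=3 -n default       # run three router pods, one per infra node
oc get pods -n default -o wide | grep router     # confirm that the pods landed on different nodes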
Because the HAProxy in each router pod depends on a local configuration file, router pods are effectively stateful containers. OpenShift uses etcd as the unified configuration store. The openshift-router process uses some mechanism (watch notifications or periodic polling) to obtain the router and route configuration, rewrites the local configuration file, and then reloads the HAProxy process to apply the newly modified configuration. To understand exactly how this works, you would have to read the source code.
3.2 Multi-router services achieve high availability through sharding
In this mode, the administrator creates and deploys multiple router services, each of which serves one or more projects/namespaces. The mapping between routers and projects/namespaces is implemented with labels. For the exact configuration, please refer to the official documentation https://docs.openshift.com/container-platform/3.11/install_config/router/default_haproxy_router.html. This is similar to the sharding feature of products such as MySQL or Memcached: it is aimed more at solving performance problems than at fully solving high availability.
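A rough sketch of sharding (the router name, the label, and the project name below are hypothetical; NAMESPACE_LABELS is the router environment variable described in the documentation above):

oc adm router router-shard-1 --replicas=2 --service-account=router   # create a second router service
oc set env dc/router-shard-1 NAMESPACE_LABELS="router=shard-1"       # this router only serves projects carrying this label
oc label namespace myproject router=shard-1                          # assign a project to that shard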
4. How to troubleshoot frequently asked questions?
As can be seen from the above analysis, for both router and route to work properly, at least make sure that the following links are all right:
1. The client accesses the service using the domain name and port configured in the route.
2. DNS resolves the domain name to the server(s) where the target router is running (this needs extra attention when a sharded configuration is used).
3. If an additional layer-4 load balancer sits in front of the routers, it is configured correctly and working properly.
4. HAProxy can match the domain name to the correct backend.
5. The router and route configuration is correctly reflected in the HAProxy configuration file.
6. The HAProxy process has been reloaded and has therefore picked up the newly modified configuration file.
7. The backend pod list is correct and at least one pod is working.
If you see the router's default "Application is not available" error page, at least one of points 3 to 7 above is not working properly, and you can troubleshoot it accordingly.
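A few commands that may help with this troubleshooting (the project, service, hostname, and router pod name are placeholders):

oc get route -n <project>                          # check the route's host, service, and termination type
oc get endpoints <service> -n <project>            # check that the service has healthy pod endpoints
oc rsh -n default <router-pod> grep <hostname> /var/lib/haproxy/conf/os_tcp_be.map   # check that the route made it into the HAProxy map files
oc logs -n default <router-pod>                    # check the openshift-router logs for reload errors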