The Istio introduction and test cases in this post were collected from the Internet and are reorganized here together with my own configuration.
Istio introduction
Official Chinese reference document
Official English reference document
The term service mesh is commonly used to describe the network of microservices that make up such applications and the interactions between them. As scale and complexity grow, a service mesh becomes harder and harder to understand and manage. Its requirements include service discovery, load balancing, fault recovery, metrics collection, and monitoring, and often more complex operational needs such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
Istio provides a complete solution to these diverse needs by offering behavioral insight into, and operational control over, the service mesh as a whole.
The goal of using istio
Istio provides a simple way to build a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, requiring little or no change to service code. To add Istio support to a service, you simply deploy a special sidecar proxy in your environment and use the Istio control plane to configure and manage the proxies, which intercept all network traffic between microservices. This enables:
Automatic load balancing of HTTP, gRPC, WebSocket, and TCP traffic.
Fine-grained control of traffic behavior through rich routing rules, retries, failover, and fault injection.
Pluggable policy layer and configuration API to support access control, rate limiting, and quotas.
Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
Secure communication between services in a cluster is achieved through powerful identity-based authentication and authorization.
Istio is designed to be scalable to meet a variety of deployment needs.
Core functions of istio
Traffic management: through simple rule configuration and traffic routing, you can control traffic and API calls between services.
Communication security: Istio's security features allow developers to focus on application-level security. Istio provides the underlying secure communication channel and manages the authentication, authorization and encryption of service communications on a large scale.
Monitoring capabilities: Istio's monitoring features let you see how service performance affects upstream and downstream functionality, while its custom dashboards provide visibility into the performance of all services and let you understand how that performance affects your other processes.
Cross-platform: Istio is platform independent and is designed to run in a variety of environments, including cross-cloud, on-premises, Kubernetes, Mesos, etc.
Integration and customization: policy enforcement components can be extended and customized to integrate with existing ACL, logging, monitoring, quotas, auditing, and so on.
System architecture
An Istio service mesh is logically split into a data plane and a control plane.
The data plane consists of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices, together with Mixer.
The control plane manages and configures the proxies to route traffic. In addition, it configures Mixer to enforce policies and collect telemetry data.
Component introduction
Istio uses an extended version of Envoy, a high-performance proxy developed in C++, to mediate all inbound and outbound traffic for every service in the mesh. Istio leverages many of Envoy's built-in features, such as:
Dynamic service discovery
Load balancing
TLS termination
HTTP/2 and gRPC proxying
Circuit breakers
Health checks
Staged rollouts (grayscale releases) with percentage-based traffic splits
Fault injection
Rich metrics
Envoy is deployed as a sidecar, in the same Kubernetes pod as the service it proxies.
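Once sidecar injection is in place (see the namespace labeling step below), you can verify that a pod really carries the extra container. A minimal check, where myapp-v1-xxxx stands in for a real pod name:

kubectl get pod myapp-v1-xxxx -o jsonpath='{.spec.containers[*].name}'
# expected output: the application container name followed by istio-proxy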
Mixer is a platform-independent component that enforces access control and usage policies across the service mesh and collects telemetry data from the Envoy proxies and other services. The proxy extracts request-level attributes and sends them to Mixer for evaluation.
Pilot provides service discovery for the Envoy sidecars, along with traffic management capabilities for intelligent routing (e.g., A/B testing, canary deployments) and resiliency (timeouts, retries, circuit breakers, etc.). It translates high-level routing rules that control traffic behavior into Envoy-specific configurations and propagates them to the sidecars at runtime.
Citadel provides strong service-to-service and end-user authentication through built-in identity and credential management. It can be used to upgrade unencrypted traffic in the service mesh and gives operators the ability to enforce policies based on service identity rather than network controls. Starting with version 0.5, Istio supports role-based access control to govern who can access your services.
Galley validates user-written Istio API configuration on behalf of the other Istio control plane components. Over time, Galley will take over top-level responsibility for acquiring, processing, and distributing configuration within Istio, insulating the other Istio components from the details of obtaining user configuration from the underlying platform (e.g., Kubernetes).
Istio deployment
Tip: running in Minikube requires at least 8 GB of memory and 4 CPU cores.
Reference document: https://thenewstack.io/tutorial-blue-green-deployments-with-kubernetes-and-istio
0. Confirm the K8S environment configuration
To deploy Istio successfully, kube-apiserver must enable the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission plugins. A kube-apiserver configuration known to run Istio successfully looks like this:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --enable-admission-plugins=MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=192.168.20.31 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \
  --proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \
  --enable-aggregator-routing=true \
  --runtime-config=api/all=true \
  --etcd-servers=http://192.168.20.31:2379,http://192.168.20.32:2379,http://192.168.20.33:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
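Before continuing, it is worth checking that the admission registration API backing these webhooks is actually served; a quick check, assuming kubectl points at this cluster:

kubectl api-versions | grep admissionregistration
# expect admissionregistration.k8s.io/v1beta1 (or similar) in the output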
Verify that Metrics Server is successfully installed and running properly:
# kubectl get apiservices | grep metrics-server
v1beta1.metrics.k8s.io    kube-system/metrics-server    True    5d
# kubectl get apiservices v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: 2018-11-07T06:23:17Z
  name: v1beta1.metrics.k8s.io
  resourceVersion: "747856"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
  uid: 9d638462-e255-11e8-a817-000c29550ccc
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: 2018-11-12T09:21:46Z
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
# kubectl top node    # k8s 1.12 uses the metrics API by default
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.20.32   580m         28%    2263Mi          61%
192.168.20.33   381m         19%    2317Mi          63%
1. Download istio
Execute the following command to download the installation package and extract it automatically:
curl -L https://git.io/getLatestIstio | sh -
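The download script also honors an ISTIO_VERSION environment variable if you want to pin a specific release instead of fetching the latest, e.g. 1.1.0 to match the directory used below:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.0 sh -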
Enter the Istio package directory. For example, suppose the package is istio-1.1.0:
cd istio-1.1.0
The installation directory contains:
The install/ directory contains the .yaml files required for installation on Kubernetes.
The samples/ directory contains the example applications.
The bin/ directory holds the istioctl client, which is used to manually inject the Envoy sidecar and to manage routing rules and policies.
The istio.VERSION file records the version configuration.
Add the bin directory to the system PATH, or copy bin/istioctl to /usr/bin, to make the istioctl command easier to use:
export PATH=$PWD/bin:$PATH
2. Install istio
Install the custom resource definitions (CRDs) for istio:
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
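To confirm the CRDs were registered (the exact count varies by Istio version):

kubectl get crds | grep 'istio.io'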
Since Istio is installed here in an environment without Rancher or a cloud provider, there is no LoadBalancer support, so we need to change the LoadBalancer service type in istio-1.0.3/install/kubernetes/istio-demo.yaml to NodePort:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
  labels:
    chart: gateways-1.0.3
    release: istio
    heritage: Tiller
    app: istio-ingressgateway
    istio: ingressgateway
spec:
  type: LoadBalancer   # change LoadBalancer to NodePort here
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  ports:
  - name: http2
    nodePort: 31380
    port: 80
    targetPort: 80
  - name: https
    nodePort: 31390
    port: 443
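If you prefer not to edit the file by hand, a substitution one-liner works too; this is a rough sketch that assumes "type: LoadBalancer" appears only in the gateway Services of this file, so check with grep first:

grep -n 'type: LoadBalancer' install/kubernetes/istio-demo.yaml
sed -i 's/type: LoadBalancer/type: NodePort/' install/kubernetes/istio-demo.yaml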
Install Istio without mutual TLS authentication between sidecars with the following command:
kubectl apply -f install/kubernetes/istio-demo.yaml
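The components take a few minutes to start; you can watch them come up with:

kubectl get pods -n istio-system -w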
After applying this file, you will find that a number of services and pods have been installed, and the istio-system namespace has been created:
[root@k8s-node-1 ~]# kubectl get pod -n istio-system
NAME                                     READY   STATUS      RESTARTS   AGE
grafana-546d9997bb-2z8nj                 1/1     Running     0          4h5m
istio-citadel-6955bc9cb7-b4jm4           1/1     Running     0          4h5m
istio-cleanup-secrets-9rm4c              0/1     Completed   0          4h5m
istio-egressgateway-7dc5cbbc56-gccmr     1/1     Running     0          4h5m
istio-galley-545b6b8f5b-pwckj            1/1     Running     0          3h8m
istio-grafana-post-install-stm7q         0/1     Completed   0          4h5m
istio-ingressgateway-7958d776b5-kf4xf    1/1     Running     0          4h5m
istio-pilot-567dd97ddc-mnmhg             2/2     Running     0          4h5m
istio-policy-5c689f446f-82lbn            2/2     Running     0          4h5m
istio-policy-5c689f446f-jl4lh            2/2     Running     1          102m
istio-policy-5c689f446f-s4924            2/2     Running     0          3m14s
istio-security-post-install-cgqtr        0/1     Completed   0          4h5m
istio-sidecar-injector-99b476b7b-4twb2   1/1     Running     0          4h5m
istio-telemetry-55d68b5dfb-ftlbl         2/2     Running     0          4h5m
istio-telemetry-55d68b5dfb-l8xk6         2/2     Running     0          3h7m
istio-telemetry-55d68b5dfb-t5kdz         2/2     Running     1          3h8m
istio-telemetry-55d68b5dfb-zgljm         2/2     Running     0          3h8m
istio-telemetry-55d68b5dfb-zxg7q         2/2     Running     1          3h8m
istio-tracing-6445d6dbbf-92876           1/1     Running     0          4h5m
prometheus-65d6f6b6c-bpk8q               1/1     Running     1          4h5m
servicegraph-57c8cbc56f-f92rs            1/1     Running     17         4h5m
[root@k8s-node-1 ~]# kubectl get svc -n istio-system
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                                                   AGE
grafana                  NodePort    10.1.157.231   <none>        3000:28271/TCP                                                                                                            4h7m
istio-citadel            ClusterIP   10.1.103.109   <none>        8060/TCP,9093/TCP                                                                                                         4h7m
istio-egressgateway      ClusterIP   10.1.171.122   <none>        80/TCP,443/TCP                                                                                                            4h7m
istio-galley             ClusterIP   10.1.60.32     <none>        443/TCP,9093/TCP                                                                                                          4h7m
istio-ingressgateway     NodePort    10.1.105.144   <none>        80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:39182/TCP,8060:21878/TCP,853:35905/TCP,15030:22762/TCP,15031:20989/TCP   4h7m
istio-pilot              ClusterIP   10.1.28.6      <none>        15010/TCP,15011/TCP,8080/TCP,9093/TCP                                                                                     4h7m
istio-policy             ClusterIP   10.1.208.196   <none>        9091/TCP,15004/TCP,9093/TCP                                                                                               4h7m
istio-sidecar-injector   ClusterIP   10.1.31.204    <none>        443/TCP                                                                                                                   4h7m
istio-telemetry          ClusterIP   10.1.178.158   <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                                                                     4h7m
jaeger-agent             ClusterIP   None           <none>        5775/UDP,6831/UDP,6832/UDP                                                                                                4h7m
jaeger-collector         ClusterIP   10.1.63.111    <none>        14267/TCP,14268/TCP                                                                                                       4h7m
jaeger-query             ClusterIP   10.1.235.64    <none>        16686/TCP                                                                                                                 4h7m
prometheus               NodePort    10.1.235.55    <none>        9090:28729/TCP                                                                                                            4h7m
servicegraph             ClusterIP   10.1.30.255    <none>        8088/TCP                                                                                                                  4h7m
tracing                  ClusterIP   10.1.114.120   <none>        80/TCP                                                                                                                    4h7m
zipkin                   ClusterIP   10.1.212.242   <none>        9411/TCP                                                                                                                  4h7m
3. Tag Namespace
To let Istio monitor and manage traffic in a given Kubernetes namespace, you need to have the Envoy sidecar injected into its pods (make sure istio-sidecar-injector is running normally):
kubectl label namespace <namespace> istio-injection=enabled
For example:
kubectl label namespace default istio-injection=enabled
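You can confirm which namespaces carry the label with the -L column option:

kubectl get namespace -L istio-injection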
If you want to remove the label, use the following command:
kubectl label namespace default istio-injection-
How this works:
Istio ships a default configuration that selects pods in namespaces labeled istio-injection=enabled and adds an istio-proxy container to them. You can edit the namespace selector with the following command:
kubectl edit mutatingwebhookconfiguration istio-sidecar-injector
The ConfigMap istio-sidecar-injector in the istio-system namespace contains the default injection policy and the sidecar injection template.
There are two injection strategies:
disabled: the sidecar injector does not inject pods by default. Add the sidecar.istio.io/inject annotation with the value "true" to a pod template to enable injection.
enabled: the sidecar injector injects pods by default. Add the sidecar.istio.io/inject annotation with the value "false" to a pod template to prevent injection for that pod; a per-deployment override is sketched below.
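To illustrate the per-pod override, here is a minimal sketch of a Deployment that opts out of injection even in an enabled namespace; the workload name and image are hypothetical:

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: legacy-app            # hypothetical workload name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        sidecar.istio.io/inject: "false"   # skip Envoy sidecar injection for these pods
    spec:
      containers:
      - name: legacy-app
        image: nginx:1.15
        ports:
        - containerPort: 80
EOF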
4. View related services
Here, use the jsonpath tool to view the external service port of istio-ingressgateway:
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
Check the external mapping port of grafana. The service on this port shows the monitoring data of the current cluster:
kubectl -n istio-system get service grafana -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
View the port of the prometheus service:
kubectl -n istio-system get service prometheus -o yaml | grep nodePort
All externally exposed ports with a UI can be accessed from a browser at ip:port.
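To avoid repeating the jsonpath lookups, you can capture the values once; a small sketch, assuming a node IP of 192.168.20.32 from the cluster above (substitute your own):

export NODE_IP=192.168.20.32   # assumption: any reachable node IP in your cluster
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
echo "Ingress gateway: http://$NODE_IP:$INGRESS_PORT"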
If the default grafana dashboards do not display data properly, check whether the clocks of the nodes are synchronized, and try executing the query from the dashboard panel's URL directly in the prometheus web UI to see whether it returns data.
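One way to run such a query from the command line is Prometheus's HTTP API; a sketch, reusing NODE_IP from above and grabbing the prometheus NodePort:

export PROM_PORT=$(kubectl -n istio-system get service prometheus \
  -o jsonpath='{.spec.ports[0].nodePort}')
# the "up" query lists scrape targets; non-empty results mean Prometheus is collecting data
curl -s "http://$NODE_IP:$PROM_PORT/api/v1/query?query=up"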
You can test istio according to the official bookinfo example: https://istio.io/docs/examples/bookinfo/
Create a service application by deploying two versions of an application and splitting traffic between them
Here we use a sample file to deploy two different versions of an nginx service and split traffic between the two versions. Create myapp.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
  - port: 80
    name: http
  selector:
    app: myapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
This file uses simple Nginx-based Docker images as the sample application: janakiramm/myapp:v1 and janakiramm/myapp:v2. Once deployed, the two versions of Nginx serve static pages with a blue or green background, respectively.
Execute:
kubectl apply -f myapp.yaml
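If the default namespace was labeled for injection earlier, each myapp pod should now run two containers (the application plus the istio-proxy sidecar); a quick check:

kubectl get pods -l app=myapp
# READY should show 2/2 for each pod once the Envoy sidecar is injected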
After they are created successfully, use the port-forward command to map a local port and test whether the service can be reached:
kubectl port-forward deployment/myapp-v1 8081:80
curl 127.0.0.1:8081
Create Gateway
Here, the gateway (Gateway), destination rule (DestinationRule), and virtual service (VirtualService) definitions are combined in an app-gateway.yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 50
    - destination:
        host: myapp
        subset: v2
      weight: 50
An Istio Gateway describes a load balancer operating at the edge of the mesh that handles incoming and outgoing HTTP/TCP connections. It specifies which ports should be exposed, which protocols to use, and the SNI configuration of the load balancer; by default it binds to the ingress gateway. A DestinationRule defines the policies that apply to traffic after routing has occurred. A VirtualService defines the routing rules applied when a host is addressed: each rule matches traffic of a particular protocol, and matched traffic is sent to the relevant target service version according to the configured weights.
Execute:
kubectl apply -f app-gateway.yaml
You can view this service through the port of istio-ingressgateway:
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
Accessing this port reaches the application directly, and if you keep making requests you will see them alternate between v1 and v2 according to the weights configured above.
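A quick way to watch the split from the command line, reusing the NODE_IP and INGRESS_PORT variables captured earlier; the grep pattern is an assumption about the demo pages' markup, so adjust it to whatever text distinguishes the two versions:

for i in $(seq 1 10); do
  # extract the version marker from each response
  curl -s http://$NODE_IP:$INGRESS_PORT/ | grep -io 'v[12]'
done

From here, a canary rollout is just a matter of editing the weights in the VirtualService; a sketch of a 90/10 split applied via a heredoc:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90    # most traffic stays on v1
    - destination:
        host: myapp
        subset: v2
      weight: 10    # canary share for v2
EOF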