Shulou (shulou.com), SLTechnology News & Howtos, Servers. Updated 2025-01-16.
This article explains the deployment process of Istio 1.4. The explanation is kept simple and clear, so it should be easy to learn and follow.
Before deploying Istio, first make sure that a Kubernetes cluster (version 1.13 or above is recommended) has been deployed and that the local kubectl client is configured.
1. Kubernetes environment preparation
To quickly prepare the kubernetes environment, we can deploy it using sealos, as follows:
Prerequisites:

Download the Kubernetes offline installation package.
Download the latest version of sealos.
Be sure to synchronize the server time.
Hostnames must not be duplicated.
Install the Kubernetes cluster:

$ sealos init --master 192.168.0.2 \
    --node 192.168.0.3 \
    --node 192.168.0.4 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --version v1.16.3 \
    --pkg-url /root/kube1.16.3.tar.gz
Check to see if the installation is normal:
$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
sealos01   Ready    master   18h   v1.16.3
sealos02   Ready    <none>   18h   v1.16.3
sealos03   Ready    <none>   18h   v1.16.3
sealos04   Ready    <none>   18h   v1.16.3

2. Download the Istio deployment file
You can download Istio from the GitHub releases page, or directly with the following command:
$ curl -L https://istio.io/downloadIstio | sh -
When the download is complete, you will get an istio-1.4.2 directory that contains:
install/kubernetes: installation files for the Kubernetes platform
samples: sample applications
bin: the istioctl binary, which can be used to manually inject the sidecar proxy
Enter the istio-1.4.2 directory.
$ cd istio-1.4.2
$ tree -L 1 ./
./
├── bin
├── demo.yaml
├── install
├── LICENSE
├── manifest.yaml
├── README.md
├── samples
└── tools

4 directories, 4 files
Copy istioctl to /usr/local/bin/:

$ cp bin/istioctl /usr/local/bin/

Enable istioctl auto-completion

bash
Copy istioctl.bash from the tools directory to the $HOME directory:

$ cp tools/istioctl.bash ~/

Add a line to ~/.bashrc:

source ~/istioctl.bash
Apply it:

$ source ~/.bashrc

zsh
Copy _istioctl from the tools directory to the $HOME directory:

$ cp tools/_istioctl ~/

Add a line to ~/.zshrc:

source ~/_istioctl
Apply it:

$ source ~/.zshrc

3. Deploy Istio
istioctl provides a variety of installation configuration profiles, which can be listed with the following command:
$ istioctl profile list
Istio configuration profiles:
    minimal
    remote
    sds
    default
    demo
The differences between them are as follows:
Component                default   demo   minimal   sds   remote

Core components
istio-citadel               X       X                X      X
istio-egressgateway                 X
istio-galley                X       X                X
istio-ingressgateway        X       X                X
istio-nodeagent                                      X
istio-pilot                 X       X        X       X
istio-policy                X       X                X
istio-sidecar-injector      X       X                X      X
istio-telemetry             X       X                X

Additional components
grafana                             X
istio-tracing                       X
kiali                               X
prometheus                  X       X                X

An X indicates that the profile installs that component.
If you just want to quickly try out and experience the full functionality, you can deploy it directly using the configuration file demo.
Before the formal deployment, two points need to be explained:
Istio CNI Plugin
Currently, the default way to redirect user pod traffic to the proxy is the istio-init init container, which runs with privileged permissions (it runs a script that writes iptables rules) and requires the NET_ADMIN capability. If you are not familiar with Linux capabilities, you can refer to my Linux capabilities series.
The main design goal of the Istio CNI plugin is to eliminate this privileged init container and replace it with an alternative that uses the Kubernetes CNI mechanism to achieve the same result. Concretely, it appends Istio's processing logic to the end of the Kubernetes CNI plugin chain; at the hook points where a pod is created or destroyed, it performs the network configuration for the Istio pod: writing iptables rules so that traffic in the pod's network namespace is forwarded to the proxy process.
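As a rough illustration of what "write iptables" means here, the following is a greatly simplified, hypothetical sketch of the kind of NAT rules istio-init / istio-cni installs; the real script sets up more chains and exclusion rules. It is a dry run that only prints the commands (so no root or iptables is needed); port 15001 is the port Envoy uses to capture traffic in Istio 1.4.

```shell
#!/bin/sh
# Simplified, illustrative sketch only -- not the real Istio iptables script.
PROXY_PORT=15001       # port Envoy listens on for captured traffic (Istio 1.4)
run() { echo "$@"; }   # dry run: print each command instead of executing it

rules=$(
  # dedicated chain that redirects TCP traffic to the sidecar proxy
  run iptables -t nat -N ISTIO_REDIRECT
  run iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports "$PROXY_PORT"
  # capture inbound traffic and locally generated outbound traffic
  run iptables -t nat -A PREROUTING -p tcp -j ISTIO_REDIRECT
  run iptables -t nat -A OUTPUT -p tcp -j ISTIO_REDIRECT
)
printf '%s\n' "$rules"
```

The only difference between the two approaches is *who* runs these rules: the privileged istio-init container inside each pod, or the CNI plugin on the node at pod-creation time.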
Please refer to the official documentation for details.
Using the Istio CNI plugin to create the sidecar iptables rules will certainly be the mainstream approach in the future, so let's try it now.
Kubernetes key plug-in (Critical Add-On Pods)
As is well known, the core components of Kubernetes run on the master nodes, but some add-on components are also critical to the whole cluster, such as DNS and metrics-server; these are called critical add-ons. If a critical add-on stops working properly, the whole cluster may malfunction, so Kubernetes uses PriorityClass to ensure that critical add-ons are scheduled and run normally. To make an application a Kubernetes critical add-on, simply set its priorityClassName to system-cluster-critical or system-node-critical, with system-node-critical being the highest priority.
> Note: critical add-ons can only run in the kube-system namespace!
For details, please refer to the official documentation.
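As a concrete illustration, marking a workload as a critical add-on takes only one field. The Deployment below is a hypothetical example (the name and image are made up for illustration), not part of the Istio installation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-critical-addon          # hypothetical example workload
  namespace: kube-system           # critical add-ons must run in kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-critical-addon
  template:
    metadata:
      labels:
        app: my-critical-addon
    spec:
      priorityClassName: system-cluster-critical   # marks the pods as a critical add-on
      containers:
      - name: app
        image: example.com/my-addon:v1             # hypothetical image
```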
Next, officially install istio with the following command:
$ istioctl manifest apply --set profile=demo \
    --set cni.enabled=true --set cni.components.cni.namespace=kube-system \
    --set values.gateways.istio-ingressgateway.type=ClusterIP
Istioctl supports two types of API:
IstioControlPlane API
Helm API
In the above installation command, the cni parameters use the IstioControlPlane API, while the values.* parameters use the Helm API.
After the deployment is complete, view the status of each component:
$ kubectl -n istio-system get pod
NAME                                      READY   STATUS    RESTARTS   AGE
grafana-6b65874977-8psph                  1/1     Running   0          36s
istio-citadel-86dcf4c6b-nklp5             1/1     Running   0          38s
istio-ingressgateway-6d759478d8-g5zz2     1/1     Running   0          37s
istio-pilot-5c4995d687-vf9c6              1/1     Running   0          37s
istio-policy-57b99968f-ssq28              1/1     Running   1          37s
istio-sidecar-injector-746f7c7bbb-qwc8l   1/1     Running   0          37s
istio-telemetry-854d8556d5-6znwb          1/1     Running   1          36s
istio-tracing-c66d67cd9-gjnkl             1/1     Running   0          38s
kiali-8559969566-jrdpn                    1/1     Running   0          36s
prometheus-66c5887c86-vtbwb               1/1     Running   0          39s

$ kubectl -n kube-system get pod -l k8s-app=istio-cni-node
NAME                   READY   STATUS    RESTARTS   AGE
istio-cni-node-k8zfb   1/1     Running   0          10m
istio-cni-node-kpwpc   1/1     Running   0          10m
istio-cni-node-nvblg   1/1     Running   0          10m
istio-cni-node-vk6jd   1/1     Running   0          10m
You can see that the CNI plugin has been installed successfully. Now check whether its configuration has been appended to the end of the CNI plugin chain:
$ cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    ...
    {
      "type": "istio-cni",
      "log_level": "info",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/ZZZ-istio-cni-kubeconfig",
        "cni_bin_dir": "/opt/cni/bin",
        "exclude_namespaces": [
          "istio-system"
        ]
      }
    }
  ]
}
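If you want to script this check (for example, across all nodes), you can grep the conflist directly. The sketch below is self-contained: it writes a sample conflist to a temporary file, since the real /etc/cni/net.d path only exists on cluster nodes.

```shell
#!/bin/sh
# On a real node, point conf at /etc/cni/net.d/10-calico.conflist instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "istio-cni",
      "kubernetes": {
        "exclude_namespaces": ["istio-system"]
      }
    }
  ]
}
EOF
# Extract the namespaces the istio-cni plugin is configured to ignore.
excluded=$(grep -o '"exclude_namespaces": \[[^]]*\]' "$conf")
echo "$excluded"
rm -f "$conf"
```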
By default, the istio-cni plugin watches pods in all namespaces except istio-system, but this does not meet our needs. A more rigorous approach is to have the plugin ignore at least two namespaces: kube-system and istio-system.
This is also very simple. Remember the IstioControlPlane API mentioned earlier? You can overwrite the previous configuration directly through it; just create an IstioControlPlane resource. For example:
$ cat cni.yaml
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  cni:
    enabled: true
    components:
      namespace: kube-system
  values:
    cni:
      excludeNamespaces:
        - istio-system
        - kube-system
        - monitoring
  unvalidatedValues:
    cni:
      logLevel: info

$ istioctl manifest apply -f cni.yaml
Delete all istio-cni-node Pod:
$ kubectl -n kube-system delete pod -l k8s-app=istio-cni-node
Check the configuration of the CNI plug-in chain again:
$ cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    ...
    {
      "type": "istio-cni",
      "log_level": "info",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/ZZZ-istio-cni-kubeconfig",
        "cni_bin_dir": "/opt/cni/bin",
        "exclude_namespaces": [
          "istio-system",
          "kube-system",
          "monitoring"
        ]
      }
    }
  ]
}

4. Expose Dashboard
There is nothing special about this: just expose it through an Ingress Controller. You can refer to my earlier article on deploying Istio 1.0; if you use Contour, see my other article, Contour Learning Notes (1): using Contour to take over the north-south traffic of Kubernetes.
Here I'll introduce a new way. Istioctl provides a subcommand to open various Dashboard locally:
$istioctl dashboard-- helpAccess to Istio web UIsUsage: istioctl dashboard [flags] istioctl dashboard [command] Aliases: dashboard, dash, dAvailable Commands: controlz Open ControlZ web UI envoy Open Envoy admin web UI grafana Open Grafana web UI jaeger Open Jaeger web UI kiali Open Kiali web UI prometheus Open Prometheus web UI zipkin Open Zipkin web UI
For example, to open a Grafana page locally, simply execute the following command:
$ istioctl dashboard grafana
http://localhost:36813
At first glance this feature may seem of little use: my cluster is not deployed locally, and the command cannot specify a listening IP, so the page cannot be opened in a local browser. In fact, you can install the kubectl and istioctl binaries locally, connect to the cluster through kubeconfig, and then run the above command locally to open the page. Convenient for developers to test, isn't it? Windows users, ignore what I just said.
5. Expose Gateway
To expose the Ingress Gateway, we can run it in HostNetwork mode. However, you will find that the ingressgateway pod fails to start: when a pod sets hostNetwork=true, its dnsPolicy is forced from ClusterFirst to Default, and during startup the Ingress Gateway needs to reach other components such as pilot through their DNS names, so it cannot start.
We can solve this problem by forcing dnsPolicy to ClusterFirstWithHostNet. For more information, see the Kubernetes DNS Advanced Guide.
The modified ingressgateway deployment configuration file is as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  ...
spec:
  ...
  template:
    metadata:
      ...
    spec:
      affinity:
        nodeAffinity:
          ...
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.4   # suppose you want to schedule to this host
      ...
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      restartPolicy: Always
      ...

Thank you for reading. The above covers the deployment process of Istio 1.4. After studying this article, I believe you have a deeper understanding of the process; the specific usage still needs to be verified in practice.