2025-02-24 Update From: SLTechnology News&Howtos
This article explains how to deploy Istio on a Kubernetes cluster using Rancher 2.0. The walkthrough is detailed; interested readers can follow along, and I hope it is helpful to you.
A service mesh solves the problem of service-to-service connectivity in cloud native applications: if you are building cloud native applications, you need one. Istio is a flagship service mesh project, and the Istio documentation gives a very thorough introduction: https://istio.io/docs/concepts/what-is-istio/. Built on Envoy Proxy and developed jointly by several technology giants, Istio is a very promising service mesh solution.
Currently, Istio works best on Kubernetes, though other platforms will be supported in the future. So to deploy Istio and demonstrate its features, you first need a Kubernetes cluster; once you have one, Rancher 2.0 makes the rest easy.
Prerequisites
To follow this demo, you will need:
A Google Cloud account (the free tier is sufficient)
An Ubuntu 16.04 instance (this is the environment the Rancher instance will run in)
A Kubernetes cluster on Google Cloud Platform, deployed with GKE. This demo uses version 1.10.5-gke.2.
Istio 0.8.0 (the version used at the time of writing; Istio 1.0 has since been released)
In general, the steps in this tutorial also apply to newer versions.
Start Rancher 2.0
First, start an instance of Rancher 2.0. For details on starting Rancher 2.0, refer to the quick start guide on Rancher's official website, which is concise and intuitive (https://rancher.com/quick-start/). The essential steps are also listed below.
This example uses Google Cloud Platform, so first start an Ubuntu instance using the Console or the CLI (https://cloud.google.com/compute/docs/instances/create-start-instance) and allow HTTP and HTTPS traffic to reach it. The commands to achieve this are as follows:
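The original command listing is not reproduced here, but a minimal sketch with gcloud, assuming a hypothetical instance name (rancher-instance) and zone (us-central1-a), might look like this:

```shell
# Create the Ubuntu VM; instance name and zone are placeholders for this sketch.
gcloud compute instances create rancher-instance \
    --machine-type n1-standard-1 \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --tags http-server,https-server \
    --zone us-central1-a

# Allow HTTP and HTTPS to instances carrying those tags
# (skip if equivalent firewall rules already exist in your network).
gcloud compute firewall-rules create allow-http \
    --allow tcp:80 --target-tags http-server
gcloud compute firewall-rules create allow-https \
    --allow tcp:443 --target-tags https-server
```

Adjust the project, zone, and machine type to your own environment before running these.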
Make sure that the Rancher instance has at least 1 vCPU and approximately 4GB RAM available.
Next, log in to the Ubuntu instance over SSH and install Docker (https://docs.docker.com/install/linux/docker-ce/ubuntu/). Once Docker is installed, start Rancher and verify that it is running.
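Following Rancher's quick start guide, Rancher can be started with a single Docker command:

```shell
# Start Rancher 2.0 as a container, exposing HTTP and HTTPS ports.
sudo docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    rancher/rancher

# Verify that the container is up.
sudo docker ps
```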
Get the public IP address of the Ubuntu instance and open it in your browser.
The page will redirect to Rancher over HTTPS, and your browser will show a warning because Rancher uses a self-signed certificate. Since you started this instance yourself, it is safe to ignore the warning (never do this on an untrusted site). Then set the administrator password and the server URL to finish bringing up Rancher 2.0. Now you can start the Kubernetes cluster.
Start a Kubernetes cluster
First, you need a Google Cloud service account with the following roles attached: Compute Viewer, Kubernetes Engine Admin, Service Account User, Project Viewer. Next, generate a service account key; for the specific steps, see: https://cloud.google.com/iam/docs/creating-managing-service-account-keys
Now you can use the service account key to start a Kubernetes cluster from Rancher 2.0 (it is safe to use the default Compute Engine service account):
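One way to generate the key with gcloud; the [PROJECT_NUMBER] placeholder is yours to fill in:

```shell
# List service accounts to find the default Compute Engine account
# (it ends in @developer.gserviceaccount.com).
gcloud iam service-accounts list

# Generate a key file for it; replace the address with the one listed above.
gcloud iam service-accounts keys create key.json \
    --iam-account [PROJECT_NUMBER]-compute@developer.gserviceaccount.com
```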
Note the ...@developer.gserviceaccount.com value of the service account; you will use it later.
Now you are ready to start the cluster. Open the Rancher dashboard and click Add Cluster. You need to do the following:
When selecting a Kubernetes managed service provider, select GCE
Give your cluster a name, such as rancher-demo
Copy and paste the contents of the key.json file generated in the steps above (the service account key) into the Service Account field.
Then go to the Configure Nodes option and select as follows:
Kubernetes version: you can choose the latest, but this test was conducted on 1.10.5-gke.2
Zone: choose the one closest to you
Machine type: at least n1-standard-1
Node count: the Istio demo requires at least 4 nodes
When the above settings are complete, your settings page is shown in the following figure:
After a few minutes you will see the cluster become active on the Rancher dashboard. Remember the ...@developer.gserviceaccount.com value from earlier? Now it comes in handy: you need it to grant your current user cluster administrator privileges (these are required to create the RBAC rules Istio needs). To do this, click the rancher-demo cluster name on the Rancher dashboard to reach the rancher-demo cluster dashboard.
Now launch kubectl, which opens a kubectl command line scoped to this particular cluster. You can also export the Kubeconfig file for use with a locally installed kubectl; for this example, use the command line provided by Rancher. Once it is open, run the following command:
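A sketch of that command, assuming the binding name cluster-admin-binding and using your own service account address from the earlier step:

```shell
# Grant cluster-admin to the account used to create the cluster; replace the
# --user value with your own ...@developer.gserviceaccount.com address.
kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin \
    --user [PROJECT_NUMBER]-compute@developer.gserviceaccount.com
```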
Deploy Istio on Rancher
Istio ships a Helm chart that Rancher can use to install it. To get the official Istio Helm chart, add the Istio repository to the Rancher catalog. To do this, go to the Rancher Global View, open the Catalogs section, and select Add Catalog. Fill in the name istio-github, set the Catalog URL to https://github.com/istio/istio.git (Rancher can handle anything git clone can handle), and set the Branch to master. When the settings are complete, the screen should look like the following:
Click Create.
In this step you deploy Istio from the Rancher catalog. First, go to the Default project of the rancher-demo cluster and open Catalog Apps. When you click Launch, you will see many apps available by default. Since this demo is about Istio, select the istio-github catalog you just created under All Catalogs. This gives you two options: istio and istio-remote. Select istio and click View Details; you will see the options for deploying Istio. Select as follows:
Set the name to istio-demo
Keep the template version at 0.8.0
Istio's default namespace is istio-system, so set the namespace to istio-system
By default Istio does not encrypt traffic between its components; encryption is important, so enable it
Istio's Helm chart does not include Grafana by default; we should add it as well
Click Add Answer and set both global.controlPlaneSecurityEnabled and grafana.enabled to true to enable these two features.
After you have done this, the interface should look like the following figure:
Click Launch.
If you open the Workloads tab now, you should see all of Istio's components running in your cluster; make sure all workloads are green. Also check the Load Balancing tab: both istio-ingress and istio-ingressgateway should be Active.
If istio-ingressgateway is stuck in Pending, you need to re-apply the istio-ingressgateway service. The steps are as follows: click Import Yaml; for Import Mode, select Cluster: Direct import of any resources into this cluster; copy and paste the istio-demo-ingressgateway.yaml service into the Import Yaml editor; and click Import:
This step resolves the istio-ingressgateway Pending state.
Now, you need to check on the Rancher panel that all Istio workloads, load balancing, and service discovery are in good condition.
One last thing: add the istio-injection=enabled label to your default namespace so that the Istio sidecar container is automatically injected into your pods. Run the kubectl command below (as mentioned above, you can launch kubectl from within Rancher).
This label causes the Istio-Sidecar-Injector to automatically inject an Envoy container into your application pods.
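The label can be applied like this:

```shell
# Label the default namespace so the sidecar injector targets it.
kubectl label namespace default istio-injection=enabled

# Confirm the label was applied.
kubectl get namespace default --show-labels
```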
Deploy the Bookinfo sample application
Now you can deploy a test application and try out Istio's capabilities. First, deploy the Bookinfo sample application. The interesting part of this application is that it runs three versions of the reviews service at the same time, which lets us experience several Istio features. Next, go to Workloads in the Default project of rancher-demo to deploy the Bookinfo app, as follows:
Download bookinfo.yaml (https://info.rancher.com/hubfs/bookinfo.yaml) to your local machine
Click Import Yaml and upload the file to Rancher with Read from File
For Import Mode, select Cluster: Direct import of any resources into this cluster
Click Import.
This should add 6 workloads to your rancher-demo Default project. As shown below:
Now, to expose the Bookinfo app through Istio, apply bookinfo-gateway.yaml (https://info.rancher.com/hubfs/bookinfo-gateway.yaml) the same way you applied bookinfo.yaml. After that, you can access the Bookinfo app from your browser. You can obtain the external IP address of the istio-ingressgateway load balancer in two ways:
First, from Rancher: go to the load balancer and select View in API from the menu on the right. A new browser page opens; search for publicEndpoints -> addresses there and you will find the public IP address.
Second, obtain the following through kubectl:
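One way to capture the address into a variable, assuming the load balancer was assigned an IP address (rather than a hostname):

```shell
# Read the external IP of the istio-ingressgateway load balancer.
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${INGRESS_HOST}
```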
Visit http://${INGRESS_HOST}/productpage in your browser and you should see the Bookinfo app. Refresh the page several times and you should see three different versions of the Book Reviews section: the first version has no stars, the second has black stars, and the third has red stars.
With Istio, you can restrict the application to route only to version 1. To do so, import route-rule-all-v1.yaml (https://info.rancher.com/hubfs/route-rule-all-v1.yaml) into Rancher; refresh the page a few seconds later and you will no longer see any stars in the reviews.
In addition, you can route traffic to just one group of users. After importing route-rule-reviews-test-v2.yaml into Rancher, log in to the Bookinfo app with the username jason (no password required) and you should see only version 2 reviews (the version with black stars). After logging out, you will again see only version 1 reviews (no stars).
At this point you have seen some of Istio's power, and this is by no means all of it: Istio has many other features. With this setup in place, you can work through the tasks in the Istio documentation.
Istio telemetry
Now it is time to explore another very useful Istio feature: the metrics it provides by default.
Let's start with Grafana. When we deployed Istio, setting grafana.enabled to true created a Grafana instance configured to collect Istio's metrics and display them in several dashboards. By default the Grafana service is not exposed publicly, so to view the metrics you first need to expose it on a public IP address. An alternative is to expose the service as a NodePort (https://kubernetes.io/docs/concepts/services-networking/service/#nodeport), but that requires opening the port on the Google Cloud Platform firewall for all nodes, which involves extra steps, so exposing the service through a public IP address is simpler.
To do this, go to Workloads in the Default project of rancher-demo and open the Service Discovery tab. Once everything in the cluster is up, there should be 5 services in the default namespace and 12 services in the istio-system namespace, all Active. Select the grafana service and choose View/Edit YAML from the menu on the right.
Find the line containing type: ClusterIP, change it to type: LoadBalancer, and click Save. Google Cloud Platform then provisions a load balancer that exposes Grafana on its default port 3000. To get Grafana's public IP address, repeat the steps from the Bookinfo example: view the grafana service in the API, where you will find the IP address, or obtain it with kubectl:
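Analogous to the ingress gateway, the address can be captured with kubectl:

```shell
# Read the external IP of the newly exposed grafana service.
export GRAFANA_HOST=$(kubectl -n istio-system get service grafana \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${GRAFANA_HOST}
```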
Visit http://${GRAFANA_HOST}:3000/ in your browser and select one of the dashboards, for example Istio Service. Since the routing rule applied earlier limits traffic to version 1 of the reviews application, select reviews.default.svc.cluster.local from the service drop-down menu to view its chart. Now generate some traffic from Rancher's kubectl with the following command:
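A simple loop, assuming INGRESS_HOST is set as above, can generate that traffic:

```shell
# Send a burst of anonymous requests to the product page to produce metrics.
for i in $(seq 1 100); do
    curl -s -o /dev/null http://${INGRESS_HOST}/productpage
done
```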
You will need to wait about 5 minutes, and the traffic generated for Grafana will be displayed on the following panel:
If you scroll down the dashboard, under SERVICE WORKLOADS you will see the Incoming Requests by Destination And Response Code chart, which confirms that requests to the reviews application land only on the v1 endpoint. Now generate requests to the version 2 application with the following command (remember that the user jason is routed to version 2 of reviews):
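A sketch of such a command; the login form field names (username, passwd) are assumptions about the Bookinfo product page and may need adjusting:

```shell
# Log in as jason (Bookinfo accepts any password), storing the session cookie,
# then send requests that Istio's rule routes to reviews v2.
curl -s -c cookies.txt -F "username=jason" -F "passwd=" \
    http://${INGRESS_HOST}/login
for i in $(seq 1 100); do
    curl -s -b cookies.txt -o /dev/null http://${INGRESS_HOST}/productpage
done
```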
You should also see the request displayed on the version 2 application:
In the same way, it is possible to expose and see other default metrics of Istio, such as Prometheus, Tracing, and ServiceGraph.
That is all for how to use Rancher 2.0 to deploy Istio on a Kubernetes cluster. I hope it has been helpful; if you found the article useful, share it so more people can see it.