2025-01-17 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report --
Many newcomers are not clear on how to use Longhorn to manage a cloud native distributed SQL database. To help solve that problem, this article explains the process in detail; readers who need it are welcome to follow along. I hope you get something out of it.
Longhorn is cloud native distributed block storage for Kubernetes: easy to deploy and upgrade, 100% open source, and persistent. It was launched by Rancher Labs, creator of the industry's most widely used Kubernetes management platform, and was donated to the CNCF in October of last year. Longhorn's built-in incremental snapshot and backup capabilities keep volume data safe, while its intuitive UI makes scheduled backups of persistent volumes easy to manage. With Longhorn you get fine-grained management and maximum control, and you can easily create a disaster recovery volume in another Kubernetes cluster and fail over to it in an emergency.
YugabyteDB is a cloud native distributed SQL database that runs in Kubernetes environments, so it interoperates with Longhorn and many other CNCF projects. It is an open source, high-performance distributed SQL database built on the scalability and fault-tolerance design of Google Spanner. Yugabyte's SQL API (YSQL) is PostgreSQL compatible.
If you are looking for a way to easily start application development on top of 100% cloud native infrastructure, this article is for you. We will show you step by step how to deploy a complete cloud native infrastructure stack consisting of Google Kubernetes Engine, Rancher enterprise Kubernetes management platform, Longhorn distributed block storage, and YugabyteDB distributed SQL database.
Why use Longhorn and YugabyteDB?
YugabyteDB is deployed on Kubernetes as a StatefulSet and requires persistent storage. Longhorn can back the local disks YugabyteDB uses, allowing large persistent volumes to be provisioned. Using Longhorn in combination with YugabyteDB has the following benefits:
You don't have to manage local disks; Longhorn manages them for you
Longhorn and YugabyteDB can be configured with large persistent volumes
Both Longhorn and YugabyteDB support multi-cloud deployment, which can help enterprises avoid cloud vendor lock-in
In addition, Longhorn replicates synchronously within a region. Without it, if YugabyteDB is deployed across regions and a node in one region fails, YugabyteDB can only rebuild that node with data from another region, which generates cross-region traffic. That traffic raises costs and slows recovery. With Longhorn, the node can be rebuilt seamlessly from Longhorn's local replicas within the region, so YugabyteDB does not have to copy data from another region, which reduces cost and improves recovery performance. In this setup, YugabyteDB only needs to perform a cross-region node rebuild if an entire region fails.
Preparation in advance
We will run the YugabyteDB cluster on a Google Kubernetes Engine cluster that uses Longhorn for storage. The components used are:
YugabyteDB (via Helm chart), version 2.1.2 https://docs.yugabyte.com/latest/quick-start/install/macos/
Rancher (via docker run), version 2.4
Longhorn (via Rancher UI), version 0.8.0
A Google Cloud Platform account
Set up a K8S cluster and Rancher on Google cloud platform
Rancher is an open source enterprise Kubernetes management platform. It makes running Kubernetes anywhere easier and simpler, meets the needs of IT teams, and empowers DevOps teams.
Rancher requires a 64-bit Ubuntu 16.04 or 18.04 Linux host with at least 4GB of memory. In this example, we will set up a Google Kubernetes Engine (GKE) cluster using the Rancher UI installed on a GCP VM instance.
The steps required to set up a Kubernetes cluster on GCP using Rancher include:
Create a Service Account with the required IAM role in GCP
Create a VM instance running Ubuntu 18.04
Install Rancher on the VM instance
Generate a Service Account key
Set up a GKE cluster through Rancher UI
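If you prefer the gcloud CLI to the console for the GCP-side steps, the first two items above can be sketched as follows. The project ID, service-account name, zone, and instance name are hypothetical placeholders, and the run helper only prints each command so the sketch is safe to review; drop the echo to execute for real.

```shell
# Sketch of the GCP-side setup with the gcloud CLI.
# PROJECT_ID, SA_NAME, ZONE and the instance name are hypothetical placeholders.
run() { echo "+ $*"; }   # print-only helper; drop the echo to actually execute

PROJECT_ID="my-gcp-project"
SA_NAME="rancher-sa"
ZONE="us-central1-a"

# Step 1: create the Service Account that Rancher will use to drive GKE.
run gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

# Step 2: create an Ubuntu 18.04 VM (n1-standard-2) to host Rancher,
# tagged so the default https-server firewall rule allows HTTPS traffic.
run gcloud compute instances create rancher-host \
  --project "$PROJECT_ID" --zone "$ZONE" \
  --machine-type n1-standard-2 \
  --image-family ubuntu-1804-lts --image-project ubuntu-os-cloud \
  --tags https-server
```

The IAM roles still need to be attached to the service account as described below.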
Create an instance of Service Account and VM
First, we need to create a Service Account attached to the GCP project. To do this, the access path is: [IAM & admin > Service accounts]
Select [Create New Service Account], name it, and click [Create].
Next, we need to add the required roles to Service Account so that we can set up the Kubernetes cluster using Rancher. Add the roles shown below and create a Service Account.
When the roles are added, click [Continue and Done].
Now we need to create the Ubuntu VM instance that will host Rancher on GCP. Navigate to: [Compute Engine > VM Instances > Create New Instance]
For the purposes of this demo, I chose the n1-standard-2 machine type. To select an Ubuntu image, click [Boot Disk > Change] and choose Ubuntu from the operating system options. Select 18.04 as the version.
Make sure HTTPS traffic is allowed by checking [Firewall > Allow HTTPS traffic].
It only takes a few minutes to create the VM instance with the above settings. Once it is ready, connect to the VM over SSH. With the terminal connected to the VM, the next step is to install Rancher by executing the following command:
$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
Note: if Docker is not found, follow these instructions to install it on Ubuntu VM:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
Create a GKE cluster
To access the Rancher server UI and create a login, open a browser and go to the IP address of the VM where you installed it.
For example: https://<server-ip>/login
Note: if you encounter any problems when trying to access Rancher UI, try using Chrome Incognito mode to load the page or disable browser caching.
Follow the prompts to create a new account.
After creating the account, go to https://<server-ip>/g/clusters. Then click Add Cluster to create a GKE cluster.
Select GKE and name the cluster.
Now we need to add the private key from the GCP Service Account we created earlier. It can be found under [IAM & admin > Service Accounts > Create Key].
This will generate a JSON file that contains the private key details.
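If you would rather generate the key from the command line, the console step above corresponds roughly to the following gcloud call. The service-account email is a hypothetical placeholder, and the run helper only prints the command for review; drop the echo to execute for real.

```shell
# Download a JSON private key for the Service Account (console equivalent:
# IAM & admin > Service Accounts > Create Key). The email is a placeholder.
run() { echo "+ $*"; }   # print-only helper; drop the echo to actually execute

SA_EMAIL="rancher-sa@my-gcp-project.iam.gserviceaccount.com"

run gcloud iam service-accounts keys create key.json --iam-account "$SA_EMAIL"
```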
Copy the contents of the JSON file to the Service Account section of Rancher UI and click [Next].
For the purpose of this tutorial, I chose the n1-standard-4 machine type, enabled [Node Pool Autoscaling], and set the maximum number of nodes to 24. Click [Create].
Verify that the cluster has been created by ensuring that the status of the cluster is set to Active. Please be patient. It will take a few minutes.
You can also access the cluster from the GCP project by going to [Kubernetes Engine > Clusters].
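The same check can be made from the command line; here is a sketch, assuming the cluster is named longhorn-demo in zone us-central1-a (substitute your own values). The run helper only prints the command; drop the echo to execute for real.

```shell
# Print the GKE cluster status (RUNNING once provisioning finishes).
# Cluster name and zone are placeholders from this walkthrough.
run() { echo "+ $*"; }   # print-only helper; drop the echo to actually execute

run gcloud container clusters describe longhorn-demo \
  --zone us-central1-a --format "value(status)"
```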
Install Longhorn on GKE
After the Rancher installation is complete, we can use its UI to install and set up Longhorn on the GKE cluster.
Click the cluster, in this case longhorn-demo, and select [System].
Next, click [Apps > Launch], search for Longhorn, and click on the card.
Name the deployment (or keep the default name), then click [Launch]. After the installation is complete, you can access the Longhorn UI by clicking the /index.html link.
Verify that Longhorn is installed and that the GKE cluster node is visible.
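From a terminal with kubectl pointed at the cluster, a quick sanity check might look like the sketch below. Longhorn installs into the longhorn-system namespace by default; the run helper only prints the commands, so drop the echo to execute for real.

```shell
# Verify the Longhorn components and the storage class it registers.
run() { echo "+ $*"; }   # print-only helper; drop the echo to actually execute

run kubectl -n longhorn-system get pods   # manager, driver, and UI pods
run kubectl get storageclass longhorn     # the class used later by YugabyteDB
```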
Use Helm to install YugabyteDB on a GKE cluster
The next step is to install YugabyteDB on the GKE cluster. This can be done by performing the steps in the following link:
https://docs.yugabyte.com/latest/deploy/kubernetes/helm-chart/
These steps are outlined below:
Verify and upgrade Helm
First, check that Helm is installed by using the Helm version command:
$ helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Error: could not find tiller
If you encounter problems related to Tiller, such as the above error, you can initialize Helm with the upgrade option:
$ helm init --upgrade --wait
$HELM_HOME has been configured at /home/jimmy/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
According to the following documentation, you should be able to use Helm chart to install YugabyteDB:
https://docs.yugabyte.com/latest/deploy/kubernetes/single-zone/oss/helm-chart/
Create a Service account
Before you can create the cluster, you need a service account granted the cluster-admin role. Use the following command to create a yugabyte-helm service account and bind it to the cluster-admin cluster role.
$ kubectl create -f https://raw.githubusercontent.com/yugabyte/charts/master/stable/yugabyte/yugabyte-rbac.yaml
serviceaccount/yugabyte-helm created
clusterrolebinding.rbac.authorization.k8s.io/yugabyte-helm created
Initialize Helm:
$ helm init --service-account yugabyte-helm --upgrade --wait
$HELM_HOME has been configured at /home/jimmy/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Create a namespace:
$ kubectl create namespace yb-demo
namespace/yb-demo created
Add the chart repository:
$ helm repo add yugabytedb https://charts.yugabyte.com
"yugabytedb" has been added to your repositories
Get updates from the repository:
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "yugabytedb" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Install YugabyteDB
We will install YugabyteDB with its Helm chart, using a LoadBalancer to expose the UI endpoint and YSQL (the Yugabyte SQL API). We will also set Helm resource options suited to a resource-constrained environment and specify the Longhorn storage class. This takes a while, so be patient. You can find detailed Helm instructions in the documentation:
https://docs.yugabyte.com/latest/deploy/kubernetes/gke/helm-chart/
$ helm install yugabytedb/yugabyte --set resource.master.requests.cpu=0.1,resource.master.requests.memory=0.2Gi,resource.tserver.requests.cpu=0.1,resource.tserver.requests.memory=0.2Gi,storage.master.storageClass=longhorn,storage.tserver.storageClass=longhorn --namespace yb-demo --name yb-demo --timeout 1200 --wait
Execute the following command to check the status of the YugabyteDB cluster:
$helm status yb-demo
You can also verify that all the components are installed and communicating by visiting the "Services & Ingress" and "Workloads" pages in GKE.
You can also view the YugabyteDB installation in the administrative UI by accessing the endpoint of the yb-master-ui service on port 7000.
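If the yb-master-ui service's external IP is not reachable from your machine, one common workaround is a local port-forward. This is a sketch; the run helper only prints the command, so drop the echo to execute for real.

```shell
# Forward the YugabyteDB master UI to localhost:7000.
run() { echo "+ $*"; }   # print-only helper; drop the echo to actually execute

run kubectl -n yb-demo port-forward svc/yb-master-ui 7000:7000
# Then browse to http://localhost:7000
```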
You can also log in to a PostgreSQL-compatible shell by executing the following command:
$ kubectl exec -n yb-demo -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yb-demo
ysqlsh (11.2-YB-2.0.12.0-b0)
Type "help" for help.
yugabyte=#
Now you can start creating database objects and working with data.
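As a first smoke test, you might create a table and query it through ysqlsh. The users table here is purely a hypothetical example, and the run helper only prints the command for review; drop the echo to execute for real.

```shell
# Create a sample table, insert a row, and read it back via YSQL.
# The 'users' table is a hypothetical example, not part of the deployment.
run() { echo "+ $*"; }   # print-only helper; drop the echo to actually execute

run kubectl exec -n yb-demo -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh \
  -h yb-tserver-0.yb-tservers.yb-demo \
  -c "CREATE TABLE users (id serial PRIMARY KEY, name text NOT NULL);" \
  -c "INSERT INTO users (name) VALUES ('alice');" \
  -c "SELECT id, name FROM users;"
```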
Use Longhorn to manage YugabyteDB volume
Next, reload the Longhorn dashboard to verify that the YugabyteDB volumes are set up correctly. The volume count should now be visible:
Click [Volume] to manage the volumes. Each volume should be visible.
You can now manage the volumes by selecting them and choosing the actions you need.
That's it! Now you can run YugabyteDB on GKE with Longhorn as its distributed block storage!