
Longhorn entry-level tutorial: persistent storage made easy!


Introduction

In this article you will learn how to run Longhorn on Civo with K3s. If you haven't used Civo yet, you can register on the official website (https://www.civo.com/) and apply for a free quota. First you need a Kubernetes cluster; then we will install Longhorn and walk through an example of how to use it.

One of the principles of cloud-native applications is that they are designed to be stateless, so that they can be scaled horizontally. The reality, however, is that unless your website or application is trivially small, you have to store that state somewhere.

Industry giants such as Google and Amazon often have custom systems that provide scalable storage for their products. But what about smaller companies?

Rancher Labs (hereinafter referred to as Rancher), creator of the industry's most widely used Kubernetes management platform, released the containerized distributed storage project Longhorn (since donated to the CNCF) in March 2018, and it fills this gap. In short, what Longhorn does is use the existing disks of Kubernetes nodes to provide stable storage for Kubernetes Pods.

Preparation

Before we can use Longhorn, we need a running Kubernetes cluster. You can install a K3s cluster yourself (https://github.com/rancher/k3s/blob/master/README.md; see the sketch below) or, if you use Civo's Kubernetes service, use that instead. This article uses Civo's Kubernetes service to create the cluster.
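If you go the self-hosted K3s route instead, here is a minimal sketch based on the quick-install script from the K3s README (it assumes a Linux host with curl; K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml):

# install a single-node K3s cluster (quick-install script from the K3s README)
curl -sfL https://get.k3s.io | sh -
# K3s bundles kubectl; confirm the node is up
sudo k3s kubectl get nodes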

We recommend instances of at least Medium size, because we will be testing stateful storage with MySQL, which can use a lot of RAM.

$ civo k8s create longhorn-test --wait
Building new Kubernetes cluster longhorn-test:
Created Kubernetes cluster longhorn-test

Your cluster needs open-iscsi installed on each node, so if you are not using Civo's Kubernetes service, then in addition to the instructions linked above you need to run the following command on each node:

sudo apt-get install open-iscsi

Next, you need to download the Kubernetes configuration file, save it locally, and set the KUBECONFIG environment variable to its file name:

$ cd ~/longhorn-play
$ civo k8s config longhorn-test > civo-longhorn-test-config
$ export KUBECONFIG=civo-longhorn-test-config
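Before installing anything, it is worth confirming that kubectl can reach the new cluster with the kubeconfig exported above:

# a quick sanity check; the node list should match your Civo cluster
$ kubectl get nodes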

Install Longhorn

There are only two steps to install Longhorn on an existing Kubernetes cluster: install the Longhorn controller and its supporting components, then create a StorageClass that Pods can use. Step one:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
namespace/longhorn-system created
serviceaccount/longhorn-service-account created
...
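The Longhorn components take a little while to start. One way to watch them come up before moving on:

# watch the longhorn-system namespace until all pods are Running
$ kubectl get pods -n longhorn-system --watch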

Creating a StorageClass takes another command; as an additional step, you can set the new class as the default so you don't have to specify it every time:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/examples/storageclass.yaml
storageclass.storage.k8s.io/longhorn created
$ kubectl get storageclass
NAME       PROVISIONER           AGE
longhorn   rancher.io/longhorn   3s
$ kubectl patch storageclass longhorn -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/longhorn patched
$ kubectl get storageclass
NAME                 PROVISIONER           AGE
longhorn (default)   rancher.io/longhorn   72s
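Because longhorn is now the cluster default, a PersistentVolumeClaim can omit storageClassName entirely. A minimal sketch (the claim name test-claim is hypothetical, purely to illustrate the default class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim   # hypothetical name, just for illustration
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # no storageClassName: the default (longhorn) is used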

Access the Longhorn Dashboard

Longhorn has a very concise dashboard where you can see used space, free space, the volume list, and so on. But first, we need to create authentication credentials:

$ htpasswd -c ./ing-auth admin
$ kubectl create secret generic longhorn-auth --from-file ing-auth --namespace=longhorn-system
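If the htpasswd command is missing, it ships in the apache2-utils package on Debian/Ubuntu; afterwards you can confirm the Secret landed in the right namespace:

# htpasswd lives in apache2-utils on Debian/Ubuntu
$ sudo apt-get install apache2-utils
# verify the Secret exists
$ kubectl get secret longhorn-auth -n longhorn-system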

Now we will create an Ingress object that uses the Traefik built into K3s to expose the dashboard to the outside. Create a file called longhorn-ingress.yaml and put the following in it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: longhorn-ingress
  annotations:
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "longhorn-auth"
spec:
  rules:
  - host: longhorn-frontend.example.com
    http:
      paths:
      - backend:
          serviceName: longhorn-frontend
          servicePort: 80

Then apply it:

$ kubectl apply -f longhorn-ingress.yaml -n longhorn-system
ingress.extensions/longhorn-ingress created
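You can confirm the Ingress exists and serves the expected host:

# the HOSTS column should show longhorn-frontend.example.com
$ kubectl get ingress -n longhorn-system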

Now you need to add an entry to your /etc/hosts file pointing one of your Kubernetes node IP addresses to longhorn-frontend.example.com:

Echo "1.2.3.4 longhorn-frontend.example.com" > > / etc/hosts

Now you can access http://longhorn-frontend.example.com in your browser. After authenticating with admin and the password you entered when running htpasswd, you will see the Longhorn dashboard.

Install MySQL using persistent storage

There is no point in running MySQL in a single container without persistent storage: when the underlying node (or the container) dies, the related business stops running and you lose customers and orders. Here, we will configure a new Longhorn persistent volume for it.

First, we need to create a few resources in Kubernetes. Each of these is a YAML file in an otherwise empty directory; alternatively, you can put them all in one file separated by ---, as shown below.
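For the single-file option, a minimal sketch of stitching the three manifests below into one mysql.yaml (the filenames match those used in this article):

# concatenate the manifests with YAML document separators
{ cat mysql/pv.yaml; echo '---'; cat mysql/pv-claim.yaml; echo '---'; cat mysql/pod.yaml; } > mysql.yaml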

A persistent volume in mysql/pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  namespace: apps
  labels:
    name: mysql-data
    type: longhorn
spec:
  capacity:
    storage: 5G
  volumeMode: Filesystem
  storageClassName: longhorn
  accessModes:
  - ReadWriteOnce
  csi:
    driver: io.rancher.longhorn
    fsType: ext4
    volumeAttributes:
      numberOfReplicates: '2'
      staleReplicaTimeout: '20'
    volumeHandle: mysql-data

A claim for the volume in mysql/pv-claim.yaml (similar to an abstract request, so that something can use the volume):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Finally, there is a Deployment in mysql/pod.yaml that runs MySQL and uses the volume above (note: we use password as the root password for MySQL here purely for simplicity; in practice you should use a secure password and store it in a Kubernetes Secret rather than in the YAML, as sketched after the manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mysql
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
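For completeness, here is a hedged sketch of the Secret-based approach mentioned above (the Secret name mysql-root and key password are hypothetical, chosen for illustration): create the Secret, then point the env entry of the Deployment at it with secretKeyRef:

# create the Secret (hypothetical name mysql-root)
$ kubectl create secret generic mysql-root --from-literal=password='use-a-real-password-here'

Then, in the Deployment:

        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root   # the hypothetical Secret created above
              key: password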

Now, apply either the folder or the single file (depending on your earlier choice):

$ kubectl apply -f mysql.yaml
persistentvolumeclaim/mysql-pv-claim created
persistentvolume/mysql-pv created
deployment.apps/my-mysql created
# or
$ kubectl apply -f ./mysql/
persistentvolumeclaim/mysql-pv-claim created
persistentvolume/mysql-pv created
deployment.apps/my-mysql created
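Before testing, it is sensible to check that the claim actually bound to the Longhorn volume; the STATUS column should read Bound:

$ kubectl get pvc mysql-pv-claim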

Test whether MySQL storage persists

Our test is simple: create a new database, delete the container (Kubernetes will recreate it for us), then reconnect and, ideally, still see our new database.

OK, now let's create a database called should_still_be_here:

$ kubectl get pods | grep mysql
my-mysql-d59b9487b-7g644   1/1     Running   0          2m28s
$ kubectl exec -it my-mysql-d59b9487b-7g644 /bin/bash
root@my-mysql-d59b9487b-7g644:/# mysql -u root -p mysql
Enter password:
mysql> create database should_still_be_here;
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
+----------------------+
| Database             |
+----------------------+
| information_schema   |
| #mysql50#lost+found  |
| mysql                |
| performance_schema   |
| should_still_be_here |
+----------------------+
5 rows in set (0.00 sec)
mysql> exit
Bye
root@my-mysql-d59b9487b-7g644:/# exit
exit

Now we will delete the Pod:

$ kubectl delete pod my-mysql-d59b9487b-7g644
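If you want to watch the Deployment schedule a replacement Pod in real time, one option:

# Ctrl+C once the new my-mysql Pod shows Running
$ kubectl get pods --watch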

After about a minute, we can look up the new Pod name, connect to it, and see whether our database still exists:

$ kubectl get pods | grep mysql
my-mysql-d59b9487b-8zsn2   1/1     Running   0          84s
$ kubectl exec -it my-mysql-d59b9487b-8zsn2 /bin/bash
root@my-mysql-d59b9487b-8zsn2:/# mysql -u root -p mysql
Enter password:
mysql> show databases;
+----------------------+
| Database             |
+----------------------+
| information_schema   |
| #mysql50#lost+found  |
| mysql                |
| performance_schema   |
| should_still_be_here |
+----------------------+
5 rows in set (0.00 sec)
mysql> exit
Bye
root@my-mysql-d59b9487b-8zsn2:/# exit
exit

A complete success! Our storage persisted even though the container was killed and recreated.
