
How to Create a Production-Grade Database Setup with Rancher

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

In this article, we will walk through how to use Rancher to create a production-grade database setup. Many readers are unfamiliar with the process, so we share it here for reference; we hope you find it rewarding.

Databases are critical to business: both data loss and data exposure pose serious risks to the enterprise. Operational errors or architectural failures can cause significant incidents and loss of resources, so failover systems and processes are required to reduce the potential for data loss. Before migrating a database architecture to Kubernetes, a cost-benefit analysis comparing a database cluster on a container architecture with one on bare metal must be completed, including an assessment of the disaster recovery requirements: Recovery Time Objective (RTO) and Recovery Point Objective (RPO). These analyses are critical for data-sensitive applications, especially when an application must be truly highly available, geographically distributed for scale and redundancy, and recoverable with low latency.

In the following steps, we will analyze the options available in Rancher high availability and Kubernetes to help you design a production-grade database.

A. Weaknesses of Container Architectures for Stateful Systems

Containers deployed in Kubernetes-style clusters are naturally stateless and ephemeral: they do not maintain a fixed identity, and data can be lost when errors or restarts occur. When designing a distributed database environment, the need to provide high availability and fault tolerance challenges Kubernetes' stateless architecture, because replication and scaling require maintaining the following kinds of state:

(1) storage; (2) identity; (3) session; (4) cluster role.

Looking at our containerized application, the challenges of a stateless architecture become immediately apparent: our application must meet several requirements:

Our database needs to store data and transactions in files that are persistent and unique to each database container;

Each container in a database application needs to maintain a fixed identity as a database node so that we can route traffic to it by name, address, or index;

Database client sessions need to maintain state; for consistency, read/write transactions must terminate before state changes occur, and state transitions must be unaffected by persistence failures;

Each database node needs a persistent role in its database cluster (such as master, replica, or shard), unless that role is changed by application-specific events or a schema change.

One partial solution to these challenges is to attach a PersistentVolume to our Kubernetes pods, with a lifecycle independent of any pod that uses it. However, a PersistentVolume does not provide consistent role assignment to cluster nodes (that is, primary, replica, or seed nodes). The cluster is not guaranteed to maintain database state throughout the application lifecycle: new containers are created with non-deterministic random names, and pods can start, terminate, or scale in any order at any time. So our challenge remains.

B. Advantages of Deploying Distributed Databases on Kubernetes

With so many challenges to deploying a distributed database in a Kubernetes cluster, is it worth the effort? Kubernetes opens up many advantages and possibilities: it can manage a large number of database services with common automated operations, from recoverability and reliability to scalability, supporting their health throughout the lifecycle. Even in virtualized environments, the time and cost required to deploy a database cluster are much lower than for a bare-metal cluster.

StatefulSets provide a way forward for the challenges described in the previous section. With the introduction of StatefulSets in version 1.5, Kubernetes implements stateful qualities for storage and identity, guaranteeing the following:

Each pod has an accompanying persistent volume, linked from the pod to its storage, which solves the storage-state problem from section A;

Pods start in a fixed order and terminate in the reverse order, which solves the session-state problem from section A;

Each pod has a unique and deterministic name, address, and ordinal index, which solves the identity and cluster-role problems from section A.

C. Deploying a StatefulSet with a Headless Service

Note: this section uses kubectl. How to use kubectl with Rancher is described here:

https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/kubectl/

StatefulSet pods require a headless service to manage the pods' network identities. A headless service has no cluster IP defined on the service. Instead, the service definition has a selector, and DNS is configured to return multiple address records (one per pod) when the service is looked up. The service's FQDN then maps to the IPs of all pods behind the service that match the selector.

Now let's follow this template to create a Headless service for Cassandra:
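The original template did not survive extraction; the following is a minimal sketch of what such a headless service could look like, modeled on the public Kubernetes Cassandra example (the name, labels, and port are illustrative):

```yaml
# cassandra-service.yaml -- headless service for the Cassandra StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  clusterIP: None        # "headless": DNS returns the pod IPs directly
  ports:
  - port: 9042           # CQL native transport port
    name: cql
  selector:
    app: cassandra       # matches the pods of the StatefulSet
```

Apply it with `kubectl apply -f cassandra-service.yaml`.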

Use the get svc command to list the properties of the Cassandra service:

Use describe svc to output the attributes of the Cassandra service in verbose format:
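Assuming the service is named cassandra as in the sketch above, the two commands are:

```shell
# List the Cassandra service (CLUSTER-IP shows "None" for a headless service)
kubectl get svc cassandra

# Show the service's attributes in verbose form
kubectl describe svc cassandra
```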

D. Creating Storage Classes for Persistent Volumes

In Rancher, we can manage persistent storage through the native Kubernetes API resources, PersistentVolume and PersistentVolumeClaim. Storage classes in Kubernetes tell us which kinds of storage our cluster supports. We can set up dynamic provisioning of persistent storage so that volumes are created automatically and attached to pods. For example, the following storage class has AWS as its storage provider, with volume type gp2 and availability zone us-west-2a.

If desired, you can also create a new storage class, for example:
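Neither example manifest survived extraction; a sketch of the AWS gp2 storage class described above could look like this (the class name `fast` is illustrative):

```yaml
# storageclass.yaml -- dynamic provisioning via AWS EBS
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2          # EBS general-purpose SSD volume type
  zone: us-west-2a   # availability zone
```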

When a StatefulSet is created, a PersistentVolumeClaim is initiated for each StatefulSet pod based on its storage class. With dynamic provisioning, a PersistentVolume can be provisioned for the pod automatically, based on the storage class requested in the PersistentVolumeClaim.

You can also manually create persistent volumes through static provisioning. You can read more about static provisioning here:

https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/

Note: for static provisioning, you must have the same number of persistent volumes as there are Cassandra nodes in the Cassandra cluster.
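For illustration, one statically provisioned PersistentVolume for a single Cassandra node could be sketched as follows (the hostPath backend, capacity, and names are placeholders; one such volume is needed per node):

```yaml
# cassandra-pv-0.yaml -- one statically provisioned volume per Cassandra node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-0
  labels:
    app: cassandra
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/cassandra-0
```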

E. Creating a StatefulSet

We can now create a StatefulSet that provides the attributes we want: ordered deployment and termination, unique network names, and stateful handling. We invoke the following command to start a Cassandra server:
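The manifest itself did not survive extraction. A trimmed StatefulSet sketch, modeled on the public Kubernetes Cassandra example (the image, resource sizes, and the `fast` storage class are assumptions), might look like:

```yaml
# cassandra-statefulset.yaml -- trimmed sketch of a Cassandra StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra          # must match the headless service name
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13   # sample image from the Kubernetes tutorial
        ports:
        - containerPort: 9042
          name: cql
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  volumeClaimTemplates:           # one PVC per pod, bound via the storage class
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
```

`kubectl apply -f cassandra-statefulset.yaml` then starts the Cassandra server.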

F. Verifying the StatefulSet

Next, we call the following command to verify that the StatefulSet has been deployed to the Cassandra server:

After the StatefulSet is created, DESIRED and CURRENT should be equal. Call the get pods command to view the ordered list of pods the StatefulSet created.
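Assuming the StatefulSet is named cassandra, the verification commands are:

```shell
# DESIRED and CURRENT should be equal once the rollout completes
kubectl get statefulsets cassandra

# Pods are created in order: cassandra-0, cassandra-1, ...
kubectl get pods -l app=cassandra
```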

During node creation, you can run nodetool status to see whether the Cassandra nodes are up.
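nodetool runs inside the Cassandra container, so it can be invoked through kubectl exec (the pod name cassandra-0 is assumed):

```shell
kubectl exec cassandra-0 -- nodetool status
```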

G. Scaling the StatefulSet Up and Down

Repeat the setup from step F x times, and invoke the scale command to increase or decrease the size of the StatefulSet. In the example below, we use x=3.
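With x=3, the scale command (StatefulSet name assumed to be cassandra) is:

```shell
# Scale the StatefulSet to 3 replicas (pass a smaller number to scale down)
kubectl scale statefulsets cassandra --replicas=3
```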

Call get statefulsets to verify that the StatefulSet has been deployed to the Cassandra servers.

Call get pods again to see the ordered list of pods created by the StatefulSet. Note that Cassandra pods are deployed sequentially.

After about five minutes, we can run a nodetool status check to verify that the Cassandra nodes have joined and formed a Cassandra cluster.

Once a node's state in nodetool changes to Up/Normal, we can perform database operations by calling CQL.

H. Calling CQL for database access and operations

When we see that a node's state is U/N (Up/Normal), we can call cqlsh to access the Cassandra container.
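For example, cqlsh can be opened inside the first pod and a keyspace and table created from its prompt (the pod name, keyspace, and table are illustrative):

```shell
kubectl exec -it cassandra-0 -- cqlsh
# cqlsh> CREATE KEYSPACE demo
#        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
# cqlsh> CREATE TABLE demo.events (id uuid PRIMARY KEY, ts timestamp, payload text);
```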

I. Using Cassandra as a persistence layer for highly available stateless database services

In the preceding exercise, we deployed a Cassandra service in the Kubernetes cluster and provided persistent storage through PersistentVolumes. We then used a StatefulSet to give the Cassandra cluster stateful attributes and scaled the cluster out to additional nodes. We can now use CQL for database access and operations on the Cassandra cluster. An advantage of CQL is that we can easily implement seamless data modeling using natural types and fluent APIs, especially when designing solutions for scaling and for time-series data models such as fraud detection. In addition, CQL leverages partition and clustering keys to speed up operations in these data-modeling scenarios.

That's all for the article "How to Create a Production-Grade Database Setup with Rancher". Thanks for reading! We hope the content shared here is helpful; if you want to learn more, feel free to follow our industry information channel.
