
The method of installing and configuring distributed system Ceph under CentOS


This article explains the method of installing and configuring the distributed system Ceph under CentOS. The content is simple and clear, and easy to learn and follow.

Developing a distributed file system takes a great deal of effort, but it is invaluable if the problem is solved correctly. Ceph's goals are simply defined: scale easily to multiple petabytes of capacity and deliver high performance for a variety of workloads, in terms of both input/output operations per second (IOPS) and bandwidth.

Unfortunately, these goals compete with one another (for example, scalability can reduce or suppress performance, or affect reliability). Ceph has developed some very interesting concepts (for example, dynamic metadata partitioning, and data distribution and replication), which are only discussed briefly in this article. Ceph's design also includes fault tolerance to protect against single points of failure, on the assumption that at large scale (petabytes of storage) failures are the norm rather than the exception. Finally, it is not designed around one particular workload, but includes the ability to adapt to changing workloads and provide optimal performance. It accomplishes all of this with POSIX compatibility, allowing it to be transparently deployed for applications that currently rely on POSIX semantics (through Ceph-targeted improvements). Finally, Ceph is open source distributed storage and part of the mainline Linux kernel (since 2.6.34).
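
To give a feel for what this POSIX transparency looks like in practice, here is a minimal sketch of mounting CephFS through the in-kernel client. The monitor address, admin key, and mount point are placeholders, and it assumes a running cluster with a metadata server, which the walkthrough below does not set up.

The code is as follows:

# mkdir -p /mnt/cephfs

# mount -t ceph 45.79.171.138:6789:/ /mnt/cephfs -o name=admin,secret=<your-admin-key>

After a mount like this, ordinary tools such as ls and cp operate on the cluster exactly as they would on a local POSIX file system.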

Now let's explore Ceph's architecture and its high-level core elements. Then I'll go a level deeper and explain some of Ceph's key aspects in more detail.

The Ceph ecosystem can be roughly divided into four parts (see figure 1): clients (data users), metadata servers (caching and synchronizing distributed metadata), an object storage cluster (storing data and metadata as objects and performing other key functions), and finally a cluster monitor (performing monitoring functions).

As shown in the figure, clients use a metadata server to perform metadata operations (to determine the location of data). The metadata server manages the location of data and decides where new data is stored. Note that the metadata itself is stored in the storage cluster (labeled "metadata I/O"). The actual file I/O happens between the client and the object storage cluster. As a result, higher-level POSIX functions (for example, open, close, and rename) are managed by the metadata server, while other POSIX functions (such as read and write) are handled directly by the object storage cluster.

Another architectural view is provided in figure 2. A set of servers accesses the Ceph ecosystem through a client interface, which understands the relationship between the metadata servers and the object-level storage. The distributed storage system can be viewed as several layers, including the storage device format (Extent and B-tree-based Object File System [EBOFS] or an alternative) and an overlying layer that manages data replication, failure detection, recovery, and subsequent data migration, called Reliable Autonomic Distributed Object Storage (RADOS). Finally, the monitors are used to identify component failures and send the corresponding notifications.

Sample system resources

**CEPH-STORAGE**

OS: CentOS Linux 7 (Core)

RAM: 1 GB

CPU: 1 CPU

DISK: 20 GB

Network: 45.79.136.163

FQDN: ceph-storage.linoxide.com

**CEPH-NODE**

OS: CentOS Linux 7 (Core)

RAM: 1 GB

CPU: 1 CPU

DISK: 20 GB

Network: 45.79.171.138

FQDN: ceph-node.linoxide.com

Pre-installation configuration

Before installing Ceph storage, we need to complete a few steps on each node. The first is to make sure that networking is configured on every node and that the nodes can reach each other.

Configure Hosts

To configure the hosts entries on each node, open the default hosts configuration file as follows (LCTT translator's note: alternatively, set up the corresponding DNS resolution).

The code is as follows:

# vi /etc/hosts

45.79.136.163 ceph-storage ceph-storage.linoxide.com

45.79.171.138 ceph-node ceph-node.linoxide.com
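
As an optional sanity check (assuming the entries above), you can confirm on each node that the names resolve and that the other node is reachable.

The code is as follows:

# getent hosts ceph-storage

# ping -c 2 ceph-node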

Install VMware tools

If your working environment is a VMware virtual environment, it is recommended that you install the open VM tools package. You can install it using the following command.

The code is as follows:

# yum install -y open-vm-tools

Configure the firewall

If you are using a firewall-enabled, restricted environment, make sure that the following ports are open on your Ceph storage management node and client nodes.

You must open ports 80, 2003, and 4505-4506 on your Admin Calamari node, and allow access to the Ceph or Calamari management node through port 80 so that clients in your network can access the Calamari web user interface.

You can use the following command to start and enable the firewall in CentOS 7.

The code is as follows:

# systemctl start firewalld

# systemctl enable firewalld

Run the following commands on the Admin/Calamari node to open the ports mentioned above.

The code is as follows:

# firewall-cmd --zone=public --add-port=80/tcp --permanent

# firewall-cmd --zone=public --add-port=2003/tcp --permanent

# firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent

# firewall-cmd --reload

On the Ceph monitor node, you need to allow the following port in the firewall.

The code is as follows:

# firewall-cmd --zone=public --add-port=6789/tcp --permanent

Then allow the following default port range so that OSDs can interact with clients and monitor nodes and send data to other OSDs.

The code is as follows:

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
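
To double-check that the rules were added (a quick verification, not part of the original steps), you can list the permanently opened ports on each node.

The code is as follows:

# firewall-cmd --zone=public --list-ports --permanent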

If you are working in a non-production environment, it is recommended that you disable the firewall and SELinux settings. We will disable the firewall and SELinux in our test environment.

The code is as follows:

# systemctl stop firewalld

# systemctl disable firewalld
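
The text above also mentions disabling SELinux in the test environment; here is one common way to do that, switching to permissive mode immediately and disabling it persistently in /etc/selinux/config (this sketch assumes the default SELINUX=enforcing line, and should only be used on non-production systems).

The code is as follows:

# setenforce 0

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config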

System upgrade

Now upgrade your system and restart to make the required changes take effect.

The code is as follows:

# yum update

# shutdown -r 0

Set up Ceph user

Now we will create a separate sudo user on each node that will be used to install the ceph-deploy tool, and allow that user passwordless access to each node, because ceph-deploy needs to install software and configuration files on the Ceph nodes without being prompted for a password.

Run the following commands on the ceph-storage host to create a new user with its own home directory.

The code is as follows:

[root@ceph-storage ~] # useradd -d /home/ceph -m ceph

[root@ceph-storage ~] # passwd ceph

Each new user on the nodes must have sudo permissions, which you can grant using the commands shown below.

The code is as follows:

[root@ceph-storage ~] # echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

ceph ALL = (root) NOPASSWD:ALL

[root@ceph-storage ~] # sudo chmod 0440 /etc/sudoers.d/ceph
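
A quick way to verify the sudoers entry (an optional check, not in the original steps) is to switch to the ceph user and confirm that sudo works without a password prompt.

The code is as follows:

# su - ceph -c "sudo whoami"

If the entry is correct, this prints root without asking for a password.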

Set SSH key

Now we will generate the ssh key on the Ceph management node and copy the key to each Ceph cluster node.

Run the following command on ceph-node to copy its ssh key to ceph-storage.

The code is as follows:

[root@ceph-node ~] # ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Created directory '/root/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

5b:*:c9 root@ceph-node

The key's randomart image is:

+-[RSA 2048]-+

The code is as follows:

[root@ceph-node ~] # ssh-copy-id ceph@ceph-storage
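
To confirm that passwordless login now works (an optional check), run a remote command from ceph-node as the ceph user; it should complete without prompting for a password.

The code is as follows:

# ssh ceph@ceph-storage hostname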

Configure the number of PIDs

To configure the maximum number of PIDs, we first check the default kernel value with the command shown below. By default, it is a relatively small maximum of 32768 threads.

Configure this value to a larger number by editing the system configuration file, as shown in the sketch below.
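
As a concrete sketch of this step: check the current value of kernel.pid_max and then raise it persistently via /etc/sysctl.conf. The value 4194303 is a commonly used upper limit on 64-bit systems and is an example choice here; adjust it to your needs.

The code is as follows:

# cat /proc/sys/kernel/pid_max

# echo "kernel.pid_max = 4194303" >> /etc/sysctl.conf

# sysctl -p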

Configure the management node server

After configuring and verifying the network on all nodes, we now install ceph-deploy as the ceph user. Check the hosts entries by opening the file (LCTT translator's note: you can also use DNS resolution for this).

The code is as follows:

# vim /etc/hosts

45.79.136.163 ceph-storage ceph-storage.linoxide.com

45.79.171.138 ceph-node ceph-node.linoxide.com

Run the following command to add the Ceph repository.

The code is as follows:

# rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Or create a new repository file and fill in the Ceph repository parameters; don't forget to replace {ceph-release} and {distro} with your current Ceph release and distribution.

The code is as follows:

[root@ceph-storage ~] # vi /etc/yum.repos.d/ceph.repo

[ceph-noarch]

name=Ceph noarch packages

baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

Then update your system and install the ceph-deploy package.

Install the ceph-deploy package

We run the following command to update the system, pull in the latest Ceph repository and other packages, and install ceph-deploy.

The code is as follows:

# yum update -y && yum install ceph-deploy -y

Configure the cluster

Use the following commands to create a new directory on the Ceph management node and switch into it; all output files and logs will be collected there.

The code is as follows:

# mkdir ~/ceph-cluster

# cd ~/ceph-cluster

# ceph-deploy new storage

If you successfully execute the above command, you will see that it has created a new configuration file.
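
You can list the working directory to see what was generated; typically this includes ceph.conf, a monitor keyring, and a ceph-deploy log file (exact file names may vary slightly between ceph-deploy versions).

The code is as follows:

# ls ~/ceph-cluster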

Now configure Ceph's default configuration file: open it with any editor and add the following two lines under the [global] section, adjusting the public network to your own.

The code is as follows:

# vim ceph.conf

osd pool default size = 1

public network = 45.79.0.0/16

Install Ceph

Now we are ready to install Ceph on each node associated with the Ceph cluster. We use the following command to install Ceph on ceph-storage and ceph-node.

The code is as follows:

# ceph-deploy install ceph-node ceph-storage

It will take some time to process all the required repositories and install the required packages.

When the ceph installation process is complete on both nodes, the next step is to create the monitor and collect the keys by running the following command on the same node.

The code is as follows:

# ceph-deploy mon create-initial
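
Once the monitor is up and the keys have been gathered, the working directory should contain the collected keyrings (for example the client admin and bootstrap keyrings; exact names depend on the Ceph version). You can check with the command below.

The code is as follows:

# ls ~/ceph-cluster/*.keyring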

Set up OSD and OSD daemons

Now we will set up the disk storage. First run the following command to list all of your available disks.

The code is as follows:

# ceph-deploy disk list ceph-storage

The output will list the disks available on your storage node, which you will use to create the OSDs. Run the following commands, substituting your own disk names.

The code is as follows:

# ceph-deploy disk zap storage:sda

# ceph-deploy disk zap storage:sdb

To finalize the OSD configuration, run the following commands to prepare and activate the journal disk and the data disk.

The code is as follows:

# ceph-deploy osd prepare storage:sdb:/dev/sda

# ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1

You need to run the same commands on all nodes, and note that they will erase everything on the disks. Then, for the cluster to work, we need to copy the keys and configuration files from the Ceph management node to all of the related nodes using the following command.

The code is as follows:

# ceph-deploy admin ceph-node ceph-storage

Test Ceph

We are almost done with the Ceph cluster setup, so let's run the following command on the ceph management node to check the running ceph status.

The code is as follows:

# ceph status

# ceph health

HEALTH_OK

If you don't see any error messages in ceph status, it means that you have successfully installed the ceph storage cluster on CentOS 7.
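
Beyond checking the status, a simple functional test (optional; the pool name, object name, and placement-group count of 64 are just example values) is to create a small pool, store an object in it with rados, and list it back.

The code is as follows:

# ceph osd pool create testpool 64

# rados -p testpool put test-object /etc/hosts

# rados -p testpool ls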

Thank you for reading. The above covers "the method of installing and configuring distributed system Ceph under CentOS"; after studying this article, you should have a deeper understanding of how to install and configure the distributed system Ceph under CentOS.
