
How to configure Ceph storage on CentOS 7.0


This article explains how to configure Ceph storage on CentOS 7.0. The content is simple and clearly laid out, so it is easy to learn and understand; follow the editor's walkthrough below to work through the steps.

Ceph is an open source software platform that stores data across a distributed computer cluster. When you plan to build a cloud, the first decision is how to implement your storage. Ceph, an open source technology backed by Red Hat, is based on an object store called RADOS and uses a set of gateway APIs to present the data as blocks, files, and objects. Because it is open source, this portable storage platform can be installed and used on both public and private clouds. The topology of a Ceph cluster is designed around replication and information distribution, and this inherent design provides data integrity. It is designed for fault tolerance and, with correct configuration, runs on commodity hardware as well as more advanced systems.

Ceph can be installed on any Linux distribution, but it needs a recent kernel and up-to-date libraries to run correctly. In this guide, we will use a minimal installation of CentOS 7.0.

System resources

**CEPH-STORAGE**
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
Network: 45.79.136.163
FQDN: ceph-storage.linoxide.com

**CEPH-NODE**
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
Network: 45.79.171.138
FQDN: ceph-node.linoxide.com

Pre-installation configuration

Before installing Ceph storage, we need to complete a few steps on each node. The first thing is to make sure each node's network is configured and that the nodes can reach each other.

Configure Hosts

To configure the hosts entries on each node, open the default hosts configuration file as shown below (alternatively, set up the corresponding DNS resolution).

# vi /etc/hosts
45.79.136.163 ceph-storage ceph-storage.linoxide.com
45.79.171.138 ceph-node ceph-node.linoxide.com
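
With these entries in place, you can verify that the nodes reach each other by name; a quick check, not part of the original steps:

# ping -c 2 ceph-node
# ping -c 2 ceph-storage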

Install VMware tools

If your working environment is a VMware virtual environment, it is recommended to install the open VM tools package. You can install it using the following command.

# yum install -y open-vm-tools

Configure the firewall

If you are working in a restricted environment with the firewall enabled, make sure the following ports are open on your Ceph storage admin node and client nodes.

You must open ports 80, 2003, and 4505-4506 on your Admin Calamari node, and allow access to the Ceph or Calamari management node through port 80 so that clients in your network can access the Calamari web user interface.

You can use the following commands to start and enable the firewall in CentOS 7.

# systemctl start firewalld
# systemctl enable firewalld

Run the following commands on the Admin Calamari node to open the ports mentioned above.

# firewall-cmd --zone=public --add-port=80/tcp --permanent
# firewall-cmd --zone=public --add-port=2003/tcp --permanent
# firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
# firewall-cmd --reload

On the Ceph monitor node, you need to allow the following port through the firewall.

# firewall-cmd --zone=public --add-port=6789/tcp --permanent

Then allow the following range of default ports, so the node can interact with clients and monitor nodes and send data to other OSDs.

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
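
As on the admin node, the monitor node's rules only take effect after the firewall reloads; the reload step is not shown in the original, so this is an assumed follow-up:

# firewall-cmd --reload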

If you are working in a non-production environment, it is recommended that you disable the firewall and SELinux settings. We will disable the firewall and SELinux in our test environment.

# systemctl stop firewalld
# systemctl disable firewalld
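
The commands above only cover the firewall; to disable SELinux as well, a typical approach (an assumption, since the original shows no SELinux commands) is to switch it off for the running system and make the change persistent:

# setenforce 0    # switch SELinux to permissive for the current boot
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots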

System upgrade

Now upgrade your system and restart to make the required changes take effect.

# yum update
# shutdown -r 0

Set up Ceph user

Now we will create a separate sudo user on each node for installing the ceph-deploy tool, and allow that user to access each node without a password, because ceph-deploy installs software and configuration files on the Ceph nodes without prompting for passwords.

Run the following commands on the ceph-storage host to create the new user with its own home directory.

[root@ceph-storage ~]# useradd -d /home/ceph -m ceph
[root@ceph-storage ~]# passwd ceph

The new user on each node must have sudo rights, which you can grant with the command shown below.

[root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
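
ceph-deploy drives sudo over a non-interactive SSH session, so if your /etc/sudoers enables the requiretty default, you may also need to comment it out; a hedged sketch, not among the original commands:

# sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers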

Set SSH key

Now we will generate the ssh key on the Ceph management node and copy the key to each Ceph cluster node.

Run the following commands on ceph-node to generate a key and copy it to ceph-storage.

[root@ceph-node ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:*:*:*:C9 root@ceph-node
The key's randomart image is:
+--[ RSA 2048]----+
[root@ceph-node ~]# ssh-copy-id ceph@ceph-storage
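
Before continuing, it is worth confirming that the passwordless login actually works; a quick check, not part of the original steps:

# ssh ceph@ceph-storage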

(Figure: SSH key)

Configure the number of PIDs

To configure the PID count, we first check the default kernel value of kernel.pid_max. By default it is a relatively small 32768 threads.

Configure this value to a larger number by editing the system configuration file, as sketched below.
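
The original figure is not reproduced here; a sketch of what it likely showed, assuming the usual kernel.pid_max sysctl (the 4194303 target is a common choice, not taken from the original):

# cat /proc/sys/kernel/pid_max    # default is typically 32768
# echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
# sysctl -p    # apply the new value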

(Figure: Change the PID value)

Configure the management node server

Having configured and verified the network on all nodes, we now install ceph-deploy using the ceph user. Check the hosts entries by opening the file (or handle this with DNS resolution instead).

# vim /etc/hosts
45.79.136.163 ceph-storage
45.79.171.138 ceph-node

Run the following command to add the Ceph repository.

# rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

(Figure: Add the Ceph repository)

Alternatively, create a new repo file and fill in the Ceph repository parameters; don't forget to substitute your actual Ceph release and distribution for the placeholders.

[root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
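
For example, with the Giant release referenced by the rpm URL above on CentOS 7 (el7), the placeholders would resolve to something like the following; this is an illustration, not an authoritative URL:

baseurl=http://ceph.com/rpm-giant/el7/noarch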

Then update your system and install the ceph-deploy package.

(Figure: Install the ceph-deploy package)

Run the following command to update the system together with the Ceph repository packages and install ceph-deploy.

# yum update -y && yum install ceph-deploy -y

Configure the cluster

Use the following commands to create a new directory on the ceph management node and enter it; this directory will collect all output files and logs.

# mkdir ~/ceph-cluster
# cd ~/ceph-cluster
# ceph-deploy new storage

(Figure: Set up a ceph cluster)

If the above command executes successfully, you will see that it has created a new configuration file.
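
On a typical ceph-deploy run, the working directory now contains the generated cluster configuration, a log file, and the monitor keyring; exact file names vary slightly between ceph-deploy versions, so treat this listing as an assumption:

# ls ~/ceph-cluster
ceph.conf  ceph.log  ceph.mon.keyring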

Now open Ceph's default configuration file with any editor and add the following two lines under the global section, adjusting the public network to match yours.

# vim ceph.conf
osd pool default size = 1
public network = 45.79.0.0/16

Install Ceph

Now we are ready to install Ceph on each node associated with the Ceph cluster. We use the following command to install Ceph on ceph-storage and ceph-node.

# ceph-deploy install ceph-node ceph-storage

(Figure: Install ceph)

It will take some time to process all the required repositories and install the required packages.

When the ceph installation process is complete on both nodes, the next step is to create the monitor and collect the keys by running the following command on the same node.

# ceph-deploy mon create-initial

(Figure: Ceph initializes the monitor)
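
If the monitor bootstraps cleanly, ceph-deploy also gathers a set of keyrings into the working directory; the usual set for this era of ceph-deploy (an assumption, not shown in the original) looks like:

# ls ~/ceph-cluster/*.keyring
ceph.client.admin.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring  ceph.mon.keyring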

Set up OSDs and OSD daemons

Now we will set up the disk storage. First, run the following command to list all of your available disks.

# ceph-deploy disk list ceph-storage

The output will list the disks attached to your storage node, which you will use to create OSDs. Run the following commands, substituting your own disk names.

# ceph-deploy disk zap storage:sda
# ceph-deploy disk zap storage:sdb

Finally, to complete the OSD configuration, run the following commands to set up the journal disk and the data disk.

# ceph-deploy osd prepare storage:sdb:/dev/sda
# ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1

You need to run the same commands on all nodes, and they will erase everything on the disks. Afterwards, for the cluster to work, we need to copy the keys and configuration files from the ceph management node to all relevant nodes with the following command.

# ceph-deploy admin ceph-node ceph-storage
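
Depending on permissions, the admin keyring pushed to each node may not be readable by non-root users; a common follow-up step (assumed here, not part of the original) is:

# chmod +r /etc/ceph/ceph.client.admin.keyring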

Test Ceph

We are almost done with the Ceph cluster setup, so let's run the following command on the ceph management node to check the running ceph status.

# ceph status
# ceph health
HEALTH_OK

If you don't see any error messages in ceph status, it means that you have successfully installed the ceph storage cluster on CentOS 7.

Thank you for reading. That covers how to configure Ceph storage on CentOS 7.0; after working through this article you should have a deeper understanding of the process, though the specifics still need to be verified in practice.
