How to configure Ceph Storage on CentOS 7.0

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Many newcomers are unsure how to configure Ceph storage on CentOS 7.0, so this article walks through the setup step by step. We hope it helps you complete the installation yourself.

Ceph is a distributed file system that can maintain POSIX compatibility while adding replication and fault tolerance.

System resources

CEPH-STORAGE
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20
Network: 45.79.136.163
FQDN: ceph-storage.linoxide.com

CEPH-NODE
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20
Network: 45.79.171.138
FQDN: ceph-node.linoxide.com

Pre-installation configuration

Before installing Ceph storage, we need to complete a few steps on each node. The first is to make sure each node's network is configured and that the nodes can reach each other.

Configure Hosts

To configure hosts entries on each node, open the default hosts configuration file (or set up the corresponding DNS records) as follows.

# vi /etc/hosts

45.79.136.163 ceph-storage ceph-storage.linoxide.com
45.79.171.138 ceph-node ceph-node.linoxide.com
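Before moving on, it can help to verify that the entries resolve and that each node answers. A minimal sketch (the hostnames match the /etc/hosts entries above; adjust them to your own):

```shell
#!/bin/sh
# Ping each cluster node once and count how many answer.
reachable=0
for host in ceph-storage ceph-node; do
    if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        echo "$host is reachable"
        reachable=$((reachable + 1))
    else
        echo "$host is NOT reachable - check /etc/hosts or DNS"
    fi
done
echo "$reachable of 2 nodes reachable"
```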

Install VMware tools

If your work environment is a VMware virtual machine, it is recommended that you install open-vm-tools. You can install it with the following command.

# yum install -y open-vm-tools

Configure the firewall

If you are using a firewall-enabled restricted environment, make sure that the following ports are open in your Ceph storage management node and client node.

On your Calamari admin node you must open ports 80, 2003, and 4505-4506, and allow access to the Ceph or Calamari admin node through port 80 so that clients on your network can reach the Calamari web user interface.

You can use the following command to start and enable the firewall in CentOS 7.

# systemctl start firewalld
# systemctl enable firewalld

Run the following command to make the Admin Calamari node open the ports mentioned above.

# firewall-cmd --zone=public --add-port=80/tcp --permanent
# firewall-cmd --zone=public --add-port=2003/tcp --permanent
# firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
# firewall-cmd --reload

In the Ceph Monitor node, you need to allow the following ports in the firewall.

# firewall-cmd --zone=public --add-port=6789/tcp --permanent

Then allow the following default port range so that the node can interact with clients and monitor nodes and send data to other OSDs.

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload

If you are working in a non-production environment, it is recommended that you disable the firewall and SELinux settings. We will disable the firewall and SELinux in our test environment.

# systemctl stop firewalld
# systemctl disable firewalld

System upgrade

Now upgrade your system and restart to make the required changes take effect.

# yum update
# shutdown -r 0

Set up Ceph users

Now we will create a separate sudo user on each node for installing the ceph-deploy tool, and allow that user passwordless access to each node, because ceph-deploy installs packages and configuration files on the Ceph nodes without prompting for passwords.

Run the following commands on the ceph-storage host to create the new user with its own home directory.

[root@ceph-storage ~]# useradd -d /home/ceph -m ceph
[root@ceph-storage ~]# passwd ceph

The new user on each node must have sudo rights; you can grant them with the commands shown below.

[root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
[root@ceph-storage ~]# sudo chmod 0440 /etc/sudoers.d/ceph

Set SSH key

Now we will generate the ssh key on the Ceph management node and copy the key to each Ceph cluster node.

Run the following command on ceph-node to copy its ssh key to ceph-storage.

[root@ceph-node ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:*:*:*:C9 root@ceph-node
The key's randomart image is:
+--[ RSA 2048]----+
[root@ceph-node ~]# ssh-copy-id ceph@ceph-storage

SSH key

Configure number of PID

To configure the number of PIDs, first check the default kernel value with the following command. By default it is 32768, a fairly low ceiling on the number of threads.

# cat /proc/sys/kernel/pid_max

Raise this value to a larger number by editing the system configuration file.

Change the PID value
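Since the original screenshot is not reproduced here, a minimal sketch of the edit, assuming the stock /etc/sysctl.conf location (4194303 is the kernel's upper limit on 64-bit systems):

```shell
# /etc/sysctl.conf -- raise the maximum number of PIDs/threads
kernel.pid_max = 4194303
```

Apply the change with `sysctl -p` or by rebooting.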

Configure the management node server

After configuring and verifying the network, we now install ceph-deploy as the ceph user. Check the hosts entries by opening the file (or set up the corresponding DNS records).

# vim /etc/hosts

45.79.136.163 ceph-storage
45.79.171.138 ceph-node

Run the following command to add the Ceph repository.

# rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Add the Ceph repository

Alternatively, create a new file with the Ceph repository parameters; don't forget to substitute your release name and version number.

[root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
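For example, with the giant release on EL7 (matching the rpm URL used above), the placeholders resolve to:

```
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-giant/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
```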

Then update your system and install the ceph-deploy package.

Install the ceph-deploy package

Run the following command to update the system with the latest Ceph repository and other packages, and install ceph-deploy.

# yum update -y && yum install ceph-deploy -y

Configure cluster

Use the following commands to create a new directory on the Ceph admin node and enter it, so that all output files and logs are collected there.

# mkdir ~/ceph-cluster
# cd ~/ceph-cluster
# ceph-deploy new storage

Set up a ceph cluster

If you successfully execute the above command, you will see that it has created a new configuration file.

Now edit Ceph's default configuration file with any editor and add the following two lines under the [global] section, adjusting the public network value to match your own network.

# vim ceph.conf

osd pool default size = 1
public network = 45.79.0.0/16

Install Ceph

Now we are ready to install Ceph on each node associated with the Ceph cluster. We use the following command to install Ceph on ceph-storage and ceph-node.

# ceph-deploy install ceph-node ceph-storage

Install ceph

It will take some time to process all the required repositories and install the required packages.

When the ceph installation process is complete on both nodes, the next step is to create the monitor and collect the keys by running the following command on the same node.

# ceph-deploy mon create-initial

Ceph initializes the monitor

Set up OSD and OSD daemons

Now we will set up the disk storage. First run the following command to list all your available disks.

# ceph-deploy disk list ceph-storage

The output will list the disks available on your storage node, which you will use to create the OSDs. Run the following commands, substituting your own disk names.
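Before zapping anything, it is worth double-checking the device names directly on the storage node, since disk zap destroys all data on the target disk. A small sketch:

```shell
#!/bin/sh
# List local block devices (name, size, type) to confirm which
# disks are safe to hand to ceph-deploy.
disks=$(lsblk -d -n -o NAME,SIZE,TYPE 2>/dev/null || echo "lsblk unavailable")
echo "$disks"
```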

# ceph-deploy disk zap storage:sda
# ceph-deploy disk zap storage:sdb

To finalize the OSD configuration, run the following commands to configure the journal disk and the data disk.

# ceph-deploy osd prepare storage:sdb:/dev/sda
# ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1

You need to run the same commands on all nodes, and they will erase everything on the disks. Then, for the cluster to work, copy the keys and configuration files from the Ceph admin node to all the relevant nodes with the following command.

# ceph-deploy admin ceph-node ceph-storage

Test Ceph

We are almost done with the Ceph cluster setup, so let's run the following command on the ceph management node to check the running ceph status.

# ceph status
# ceph health
HEALTH_OK

If ceph status reports no errors, you have successfully installed the Ceph storage cluster on CentOS 7.
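For scripted checks, the health output can be tested directly. A minimal sketch, assuming the admin keyring was distributed with `ceph-deploy admin` as above:

```shell
#!/bin/sh
# Query cluster health; fall back to a sentinel if ceph is unavailable.
status=$(ceph health 2>/dev/null || echo "HEALTH_UNKNOWN")
case "$status" in
    HEALTH_OK*) echo "cluster healthy" ;;
    *)          echo "cluster not healthy: $status" ;;
esac
```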


After reading the above, have you mastered how to configure Ceph storage on CentOS 7.0? Thank you for reading!
