How to Build a Cluster with Ceph


In this article, the editor shares how to build a cluster with Ceph. I hope you will get something out of it after reading. Let's work through it together.

Background: an introduction to Ceph

Ceph is a distributed storage system that provides object storage, block storage, and file storage; its object and block storage integrate well with the major cloud platforms. A Ceph cluster consists of Monitor nodes, MDS nodes (needed only for file storage), and at least two OSD daemons.

Ceph OSD: the OSD daemon stores data, handles data replication, recovery, backfilling, and rebalancing, and reports partial monitoring information to the Monitors via heartbeats. A Ceph cluster requires at least two OSD daemons.

Monitor: maintains the cluster's state maps, including the monitor map, OSD map, and Placement Group (PG) map, as well as the state-change history of Monitors, OSDs, and PGs.

MDS: stores metadata for the Ceph file system.

Environment planning

Four servers: one as the Monitor, two as OSDs, and one as the RGW gateway. PS: we will not build CephFS.

Ubuntu 16.04 is installed on all servers.

Environment preparation: edit the hosts file and hostname

The Monitor node is named node1, the two OSD nodes are named node2 and node3, and the RGW node is named node4.

Open the /etc/hostname file on the Monitor node, change its content to node1, then save and exit. However, the file only takes effect after the OS restarts, so execute the following command manually to make it take effect immediately:

# hostname node1

Then open the /etc/hosts file on each node and add the IP-to-name mappings for all four nodes, similar to the following:

127.0.0.1       localhost
127.0.1.1       node1
192.168.1.100   node1
192.168.1.101   node2
192.168.1.102   node3
192.168.1.103   node4
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Build the NTP environment

Use the Monitor server as the NTP server and the other three servers as NTP clients.

NTP server

Install the NTP service: apt-get install ntp. When finished, modify the configuration file /etc/ntp.conf. Because this environment has no public network access, the Ubuntu time sources cannot be reached, and there are no other NTP time sources available either, so we use the Monitor server itself as the local time source for the NTP server. Add the following at the end of the file:

server 127.127.1.0   # if you have other NTP sources, you can change the IP address here
fudge 127.127.1.0 stratum 10

Also comment out the Ubuntu time-source section:

#pool 0.ubuntu.pool.ntp.org iburst
#pool 1.ubuntu.pool.ntp.org iburst
#pool 2.ubuntu.pool.ntp.org iburst
#pool 3.ubuntu.pool.ntp.org iburst
#pool ntp.ubuntu.com

After the modification is complete, save and exit, then restart the ntp service: service ntp restart.

Note: when the NTP service has just been restarted, it needs some time to synchronize with its time source and cannot provide service immediately; it usually takes about 5 minutes to work normally. You can execute ntpq -p on the NTP server to check the service status.
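
For reference, once the local clock source is selected, the ntpq -p output looks roughly like this sketch (the * marks the currently selected source; values will vary):

# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.          10 l   33   64  377    0.000    0.000   0.000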

NTP client

To install ntpdate, execute the command: apt install ntpdate.

After installation, execute ntpdate [-d] {serverIp} to synchronize the time. The optional -d flag turns on debugging output. For example:

# ntpdate 109.105.115.67

On success, a prompt similar to this appears: ntpdate[39600]: step time server 109.105.115.67 offset -46797.696033 sec. If you instead see ntpdate[28489]: no server suitable for synchronization found, the server may not yet be providing service properly; wait a while and try again.

After synchronization succeeds, write the time to the hardware clock so that it does not revert after an OS restart. Execute the command hwclock -w.
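
As a sketch of that step (hwclock -r optionally reads the hardware clock back so you can verify the write):

# hwclock -w    # write the current system time to the hardware clock
# hwclock -r    # optional: confirm what the hardware clock now reads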

Install SSH server

Install the SSH server service on all nodes.

# apt-get install openssh-server

Because our Ceph setup uses the root user directly, we need to modify the ssh configuration file /etc/ssh/sshd_config: find the PermitRootLogin option and change its value to yes. Save and exit the file, then restart the SSH service: service ssh restart.
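
For clarity, the relevant line in /etc/ssh/sshd_config should end up reading as follows, after which the service is restarted:

PermitRootLogin yes

# service ssh restart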

Set up SSH password-free login

Generate an SSH key pair; do not set a passphrase, just press Enter at every prompt.

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.

Copy this key to all nodes.

# ssh-copy-id node1
# ssh-copy-id node2
# ssh-copy-id node3
# ssh-copy-id node4
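
To confirm that password-free login works, a quick check against one node should print the remote hostname without prompting for a password:

# ssh node2 hostname
node2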

Set network proxy (optional)

If the whole intranet environment needs a network proxy before apt-get can install packages, the proxy must be added to the configuration file /etc/environment, for example:

Http_proxy= "http://[proxy-ip]:[proxy-port]"https_proxy=https://[proxy-ip]:[proxy-port]

When the setup is complete, execute: export http_proxy="http://[proxy-ip]:[proxy-port]"; export https_proxy="https://[proxy-ip]:[proxy-port]" so that the configuration takes effect immediately.

Note: be sure to configure this in /etc/environment, not in files such as /etc/profile or ~/.profile. When installing Ceph, ssh is used to connect to the remote nodes and run the apt-get installer, and ssh only picks up the environment variables in /etc/environment; setting them in other files will cause network access to fail.

Note 2: this must be configured on all nodes.

Deploy Ceph Storage

Here we install ceph-deploy on the Monitor node node1, then use ceph-deploy to deploy the Monitor on node1, deploy OSDs on node2 and node3, and finally deploy the Ceph gateway rgw on node4.

Create a directory on node1 to hold the configuration information generated by ceph-deploy. The ceph-deploy command writes output files to the current directory, so make sure you execute it from this directory.

# mkdir my-cluster
# cd my-cluster

Install ceph-deploy

Update the package index and install ceph-deploy.

Apt-get update & & sudo apt-get install ceph-deploy starts deploying Ceph again

If you encounter problems during the installation process and want to restart the installation, execute the following command to empty the configuration.

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

If you also want to remove the Ceph packages, execute:

ceph-deploy purge {ceph-node} [{ceph-node}]

Note: in practice, executing ceph-deploy purgedata directly always produced an error saying that Ceph is still installed on the node, and the cleanup was refused. So I always execute ceph-deploy purge first, then ceph-deploy purgedata and ceph-deploy forgetkeys.

Deploy Ceph: create the cluster

ceph-deploy new {initial-monitor-node(s)}

Such as:

ceph-deploy new node1

Use the ls and cat commands in the current directory to inspect the ceph-deploy output: you should see a Ceph configuration file, a monitor keyring, and a log file for the new cluster.
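
For reference, after ceph-deploy new the directory typically contains something like this (the log file name can vary by ceph-deploy version):

# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring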

Modify osd parameters

Because there are only two OSDs in our environment while Ceph's default number of replicas is 3, we need to modify the configuration file ceph.conf, adding the following to the [global] section:

osd pool default size = 2

If the file system of the partition storing OSD data is not xfs, you also need to set some osd variables; otherwise the OSDs will fail to start with the error "ERROR: osd init failed: (36) File name too long". Again in the [global] section of ceph.conf, add the following:

osd max object name len = 256
osd max object namespace len = 64

Configure Ceph network parameters

If you have multiple networks in your environment, add the following configuration under the [global] section of ceph.conf.

public network = {ip-address}/{netmask}

If there is only one network in the environment, this configuration is not required. For more information about network configuration, please refer to http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
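
With the addressing used in this article (192.168.1.x), the setting would look like this; adjust it to your own subnet:

public network = 192.168.1.0/24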

Install Ceph

ceph-deploy install {ceph-node} [{ceph-node} ...]

For example:

ceph-deploy install node1 node2 node3 node4

After the command is executed, Ceph is installed on each node. Note: if you have executed the ceph-deploy purge command, you will need to reinstall Ceph.
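
One quick way to confirm the installation is to check the version on each node (shown here as a sketch; your exact version string will differ):

# ceph --version
ceph version 10.2.x (...)    # example: a Jewel-era build on Ubuntu 16.04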

Install Monitor

Install and initialize Monitor and collect keys:

# ceph-deploy mon create-initial

After the command executes, the following keyrings are generated in the current directory:

{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring

Create an OSD data directory

An OSD's data directory can be a separate partition, or just a directory on an existing partition. Here we use a directory directly. If you need separate partitions for data and journal, refer to: http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/.

Add two OSDs.

# ssh node2
# sudo mkdir /var/local/osd0
# chown ceph:ceph /var/local/osd0
# exit
# ssh node3
# sudo mkdir /var/local/osd1
# chown ceph:ceph /var/local/osd1
# exit

Prepare OSD

ceph-deploy osd prepare {ceph-node}:/path/to/directory

Such as:

# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

Activate OSD

ceph-deploy osd activate {ceph-node}:/path/to/directory

Such as:

# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

Copy the configuration file and admin key

ceph-deploy admin {admin-node} {ceph-node}

Such as:

# ceph-deploy admin node1 node2 node3

Ensure that the permissions of ceph.client.admin.keyring are correct and execute on each node:

# chmod +r /etc/ceph/ceph.client.admin.keyring

Check cluster status

# ceph -s
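
For reference, a healthy result on this two-OSD layout looks roughly like the following sketch (fsid, epochs, and counts will differ in your cluster):

# ceph -s
    cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.1.100:6789/0}
     osdmap e10: 2 osds: 2 up, 2 in
      pgmap v25: 64 pgs, 1 pools, 0 bytes data, 0 objects
            64 active+clean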

The cluster should report health HEALTH_OK with all PGs in the active+clean state; if so, the deployment is fine.

Deploy rgw Gateway

If you want to use Ceph's object storage, you need to deploy an rgw gateway. Create a new rgw instance with:

ceph-deploy rgw create {gateway-node}

Such as:

# ceph-deploy rgw create node4
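
Since the Jewel release, ceph-deploy's rgw create starts the gateway with the embedded Civetweb server on port 7480 by default. A quick sanity check, assuming that default port, is an anonymous request from any node:

# curl http://node4:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult ...>

An XML ListAllMyBucketsResult response like the above indicates the gateway is up and answering.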

Verify Ceph

Once the cluster status is healthy, you can write data and read it back.

Create a plain text file testfile.txt and write data to it.
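
One simple way to create it, for example:

# echo "hello ceph" > testfile.txt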

Create a pool. Format: rados mkpool {pool-name}, execute:

# rados mkpool data
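
You can list the pools to confirm it was created (your list may also include default pools such as rbd):

# rados lspools
rbd
data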

Write the file into the pool. Format: rados put {object-name} {file-path} --pool={pool-name}. Execute:

# rados put test-object-1 testfile.txt --pool=data

If the file is small, the command should finish quickly. If it hangs for a long time, something has probably gone wrong and needs troubleshooting.

Check whether the file exists in the pool. Format: rados -p {pool-name} ls. Execute:

# rados -p data ls

Determine the location of the file. Format: ceph osd map {pool-name} {object-name}, execute:

# ceph osd map data test-object-1
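
The output names the placement group and the OSDs holding the object, roughly like this sketch (PG IDs and OSD numbers will differ):

osdmap e13 pool 'data' (1) object 'test-object-1' -> pg 1.74dc35e2 (1.22) -> up ([1,0], p1) acting ([1,0], p1)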

Read the file from the pool. Format: rados get {object-name} --pool={pool-name} {file-path}. Execute:

# rados get test-object-1 --pool=data myfile

To check whether the retrieved file myfile is identical to the original testfile.txt, execute: diff myfile testfile.txt.
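
If diff produces no output, the two files are identical and the write/read round trip succeeded:

# diff myfile testfile.txt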

Delete the file from the pool. Format: rados rm {object-name} --pool={pool-name}. Execute:

# rados rm test-object-1 --pool=data

After reading this article, I believe you have a basic understanding of how to build a cluster with Ceph. Thank you for reading!
