This article explains how to install a Ceph storage cluster on Ubuntu 16.04. The editor thinks it is very practical, so it is shared here as a reference; follow along to have a look.
Ceph is a high-performance, reliable, scalable open source storage platform. It is a free distributed storage system that provides interfaces for object, block, and file-level storage and can run without a single point of failure.
In this tutorial, I will guide you through installing and setting up a Ceph cluster on Ubuntu 16.04 servers. A Ceph cluster consists of these components:
Ceph OSD (ceph-osd) - handles data storage, data replication, and recovery. A Ceph cluster requires at least two Ceph OSD servers; we will use three Ubuntu 16.04 servers for this installation.
Ceph Monitor (ceph-mon) - monitors the cluster state and maintains the OSD map and CRUSH map. Here we use one server.
Ceph Metadata Server (ceph-mds) - needed if you want to use Ceph as a file system.
Prerequisites
6 server nodes with Ubuntu 16.04 installed
Root permissions on all nodes
I will use the following hostnames and IP addresses:
Hostname       IP address
ceph-admin     10.0.15.10
mon1           10.0.15.11
ceph-osd1      10.0.15.21
ceph-osd2      10.0.15.22
ceph-osd3      10.0.15.23
ceph-client    10.0.15.15
Step 1-configure all nodes
For this installation, I will configure all six nodes to prepare them for the Ceph cluster software, so you must run the following commands on all nodes. Also make sure that ssh-server is installed on all nodes.
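If the ssh server is missing on any node, it can be installed with the standard Ubuntu package (an example using the openssh-server package; adjust if your images already ship with it):
sudo apt-get install -y openssh-server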
Create a Ceph user
Create a new user named cephuser on all nodes:
useradd -m -s /bin/bash cephuser
passwd cephuser
After creating the new user, we need to configure passwordless sudo for cephuser. This means that cephuser can run commands with sudo privileges without entering a password first.
Run the following commands to complete the configuration.
Echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee / etc/sudoers.d/cephuser chmod 0440 / etc/sudoers.d/cephuser sed-I s'/Defaults requiretty/#Defaults requiretty'/g / etc/sudoers
Install and configure NTP
Install NTP to synchronize the date and time on all nodes. First run the ntpdate command to set the date and time via NTP; we will use the US pool NTP servers. Then start the NTP service and enable it to start at boot time.
sudo apt-get install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org
hwclock --systohc
systemctl enable ntp
systemctl start ntp
Install Open-vm-tools
If you are running all the nodes in VMware, you need to install this virtualization tool.
sudo apt-get install -y open-vm-tools
Install Python and parted
In this tutorial, we need the python packages to build the ceph cluster. Install python, python-pip, and parted.
sudo apt-get install -y python python-pip parted
Configure the Hosts file
Use the vim editor to edit the hosts file on all nodes.
vim /etc/hosts
Paste the following configuration:
10.0.15.10        ceph-admin
10.0.15.11        mon1
10.0.15.21        ceph-osd1
10.0.15.22        ceph-osd2
10.0.15.23        ceph-osd3
10.0.15.15        ceph-client
Save the hosts file and exit the vim editor.
Now you can try pinging between the servers by hostname to test network connectivity.
ping -c 5 mon1
(Screenshot: Ceph cluster installation on Ubuntu 16.04)
Step 2-configure the SSH server
In this step, we will configure the ceph-admin node. The admin node is used to configure the monitor node and the OSD nodes. Log in to the ceph-admin node and switch to the cephuser user.
ssh root@ceph-admin
su - cephuser
The admin node is used to install and configure all cluster nodes, so the cephuser user on the ceph-admin node must be able to connect to all nodes without a password. We need to configure passwordless SSH access for the cephuser user on the ceph-admin node.
Generate the ssh key for cephuser.
ssh-keygen
Leave the passphrase empty.
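If you prefer a non-interactive run, the same key can be generated in one command; this is just a sketch that assumes the default RSA key path ~/.ssh/id_rsa:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa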
Next, create a configuration file for ssh.
vim ~/.ssh/config
Paste the following configuration:
Host ceph-admin
        Hostname ceph-admin
        User cephuser

Host mon1
        Hostname mon1
        User cephuser

Host ceph-osd1
        Hostname ceph-osd1
        User cephuser

Host ceph-osd2
        Hostname ceph-osd2
        User cephuser

Host ceph-osd3
        Hostname ceph-osd3
        User cephuser

Host ceph-client
        Hostname ceph-client
        User cephuser
Save the file and exit vim.
(Screenshot: ceph-admin configuration)
Change the permissions of the config file to 644.
chmod 644 ~/.ssh/config
Now use the ssh-copy-id command to copy the key to all nodes.
ssh-keyscan ceph-osd1 ceph-osd2 ceph-osd3 ceph-client mon1 >> ~/.ssh/known_hosts
ssh-copy-id ceph-osd1
ssh-copy-id ceph-osd2
ssh-copy-id ceph-osd3
ssh-copy-id mon1
Enter the cephuser password when you are asked for a password.
(Screenshot: ceph-admin deploying the ssh key to all cluster nodes)
Now try logging in to the ceph-osd1 server from the ceph-admin node to test that passwordless login works.
ssh ceph-osd1
(Screenshot: passwordless SSH from ceph-admin to all cluster nodes)
Step 3-configure Ubuntu Firewall
For security reasons, we need to turn on the firewall on the servers. We will use Ufw (Uncomplicated Firewall), the default firewall for Ubuntu, to protect the system. In this step, we enable ufw on all nodes and then open the ports needed by ceph-admin, ceph-mon, and ceph-osd.
Log in to the ceph-admin node and install the ufw package.
ssh root@ceph-admin
sudo apt-get install -y ufw
Open ports 22, 80, 2003, and 4505-4506, then enable the firewall.
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 2003/tcp
sudo ufw allow 4505:4506/tcp
Enable ufw and set it to start at boot.
sudo ufw enable
(Screenshot: UFW firewall with the Ceph services)
From the ceph-admin node, log in to the monitoring node mon1 and install ufw.
ssh mon1
sudo apt-get install -y ufw
Open the ports needed by the Ceph monitor node, then enable ufw.
sudo ufw allow 22/tcp
sudo ufw allow 6789/tcp
sudo ufw enable
Open ports 6800-7300 on each OSD node: ceph-osd1, ceph-osd2, and ceph-osd3.
Log in to each ceph-osd node from ceph-admin and install ufw.
ssh ceph-osd1
sudo apt-get install -y ufw
Open the ports on the OSD node, then enable the firewall.
sudo ufw allow 22/tcp
sudo ufw allow 6800:7300/tcp
sudo ufw enable
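Repeat the same installation and port rules on the remaining OSD nodes; for example, on ceph-osd2 (and likewise on ceph-osd3):
ssh ceph-osd2
sudo apt-get install -y ufw
sudo ufw allow 22/tcp
sudo ufw allow 6800:7300/tcp
sudo ufw enable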
The ufw firewall configuration is complete.
Step 4-configure the Ceph OSD nodes
In this tutorial, we have three OSD nodes, each with two hard disk partitions.
/dev/sda is used for the root partition
/dev/sdb is an empty partition - 20GB
We are going to use /dev/sdb as the Ceph disk. From the ceph-admin node, log in to all OSD nodes, then format the /dev/sdb partition with the XFS file system.
ssh ceph-osd1
ssh ceph-osd2
ssh ceph-osd3
Use the fdisk command to check the partition table.
sudo fdisk -l /dev/sdb
Format the /dev/sdb partition with the XFS file system, first using the parted command to create a GPT partition table.
sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
Next, format the partition as XFS with the mkfs command.
sudo mkfs.xfs -f /dev/sdb
Now check the partition again; you should see an XFS /dev/sdb partition.
sudo fdisk -s /dev/sdb
sudo blkid -o value -s TYPE /dev/sdb
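The same partitioning and formatting must also be done on ceph-osd2 and ceph-osd3. If you would rather drive it from the ceph-admin node, a loop like the following sketch works, assuming /dev/sdb is the spare disk on every OSD node and passwordless sudo is configured as in step 1:
for node in ceph-osd1 ceph-osd2 ceph-osd3; do
  ssh $node "sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%"
  ssh $node "sudo mkfs.xfs -f /dev/sdb"
done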
(Screenshot: formatted partitions on the Ceph OSD nodes)
Step 5-create a Ceph cluster
In this step, we will install Ceph on all nodes from the ceph-admin node. To get started, log in to the ceph-admin node.
ssh root@ceph-admin
su - cephuser
Install ceph-deploy on the ceph-admin node
In the first step we already installed python and python-pip on the system. Now we need to install the Ceph deployment tool ceph-deploy from the pypi python repository.
Use the pip command to install ceph-deploy on the ceph-admin node.
sudo pip install ceph-deploy
Note: make sure that all nodes have been updated.
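If you have not updated the nodes yet, a plain Ubuntu update on each node is enough (a generic example, not specific to Ceph):
sudo apt-get update
sudo apt-get -y upgrade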
After the ceph-deploy tool has been installed, create a new directory for the Ceph cluster configuration.
Create a new cluster
Create a new cluster directory.
mkdir cluster
cd cluster/
Next, create a new cluster by defining the monitoring node mon1 with the ceph-deploy command.
ceph-deploy new mon1
The command generates the Ceph cluster configuration file ceph.conf in the cluster directory.
(Screenshot: new Ceph cluster configuration generated)
Edit the ceph.conf with vim.
vim ceph.conf
Under the [global] block, paste the following configuration.
# Your network address
public network = 10.0.15.0/24
osd pool default size = 2
Save the file and exit the editor.
Install Ceph on all nodes
Now install Ceph on all nodes from the ceph-admin node with a single command.
ceph-deploy install ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon1
The command automatically installs Ceph on all nodes: mon1, ceph-osd1-3, and ceph-admin. The installation will take some time.
Now deploy the monitor on the mon1 node.
ceph-deploy mon create-initial
The command creates the monitor key; check and gather the keys with the command below.
ceph-deploy gatherkeys mon1
(Screenshot: Ceph keys deployed)
Add OSD to the cluster
With Ceph installed on all nodes, we can now add the OSD daemons to the cluster. The OSD daemons will create their data and journal on the /dev/sdb partition.
Check the availability of the /dev/sdb disk on all OSD nodes.
ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3
(Screenshot: disk list of the OSD nodes)
You will see the /dev/sdb disk with the XFS format that we created earlier.
Next, delete the partition table with the zap option on all OSD nodes.
ceph-deploy disk zap ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb
This command deletes all data on /dev/sdb on all Ceph OSD nodes.
Now prepare all the OSD nodes and make sure the results are correct.
ceph-deploy osd prepare ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb
If the output shows that ceph-osd1-3 are ready for OSD use, the command succeeded.
(Screenshot: preparing the ceph-osd nodes)
Activate OSD with the following command:
ceph-deploy osd activate ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb
Now you can check the sdb disk on the OSD nodes again.
ceph-deploy disk list ceph-osd1 ceph-osd2 ceph-osd3
(Screenshot: Ceph OSDs activated)
The result is that /dev/sdb now has two partitions:
/dev/sdb1 - Ceph Data
/dev/sdb2 - Ceph Journal
Or you can check directly on the OSD node.
ssh ceph-osd1
sudo fdisk -l /dev/sdb
(Screenshot: Ceph OSD nodes created)
Next, deploy the management key to all associated nodes.
ceph-deploy admin ceph-admin mon1 ceph-osd1 ceph-osd2 ceph-osd3
Run the following command on all nodes to change the key file permissions.
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
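If you prefer to do this from the ceph-admin node instead of logging in to every machine, a loop such as this sketch also works, relying on the passwordless SSH and sudo setup from the earlier steps:
for node in ceph-admin mon1 ceph-osd1 ceph-osd2 ceph-osd3; do
  ssh $node "sudo chmod 644 /etc/ceph/ceph.client.admin.keyring"
done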
The Ceph cluster has been created on Ubuntu 16.04.
Step 6-Test Ceph
In the previous steps, we installed Ceph, created the new cluster, and added the OSD nodes to it. Now we should test the cluster to make sure it works as intended.
From the ceph-admin node, log in to the Ceph monitoring server mon1.
ssh mon1
Run the following command to check that the cluster is healthy.
sudo ceph health
Now check the cluster status.
sudo ceph -s
You should see output similar to the following:
(Screenshot: Ceph cluster status)
Make sure that the Ceph health status is OK and that there is a monitor node mon1 with the IP address 10.0.15.11. There should be three OSD servers, all up and running, and the available disk space should be 45GB - 3x 15GB Ceph data OSD partitions.
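As an extra, optional check you can also list the OSD tree on mon1 to confirm that all three OSD daemons are up (a standard Ceph command, not shown in the original screenshots):
sudo ceph osd tree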
We have successfully set up a new Ceph cluster on Ubuntu 16.04.
Thank you for reading! This concludes the article on "how to install a Ceph storage cluster on Ubuntu 16.04". I hope the content above is helpful and helps you learn more. If you found the article useful, please share it so more people can see it!