Cluster types: HPC / LB / HA
LB: nginx / LVS / HAProxy / F5
HA: keepalived / RHCS
LVS: Linux Virtual Server
LVS working modes: NAT / TUN / DR
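For context, a minimal ipvsadm sketch of the DR mode listed above; the VIP 192.168.4.100, the real server 192.168.4.7, and the host name "lvs" are placeholders for illustration only:
[root@lvs ~]# ipvsadm -A -t 192.168.4.100:80 -s rr    # define the virtual service (VIP) with round-robin scheduling
[root@lvs ~]# ipvsadm -a -t 192.168.4.100:80 -r 192.168.4.7:80 -g    # add a real server in DR mode (-m would be NAT, -i TUN)
[root@lvs ~]# ipvsadm -Ln    # list the configured virtual services and real servers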
Storage:
1. Block storage, such as iSCSI, FC SAN
2. File storage, such as NFS, CIFS
3. Object storage
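To make the three categories concrete, here is a rough sketch of how each is typically accessed; the server address 192.168.4.254 and the export name are only placeholders:
[root@client ~]# mount -t nfs 192.168.4.254:/share /mnt    # file storage: mount a shared file system (NFS)
[root@client ~]# iscsiadm -m discovery -t st -p 192.168.4.254    # block storage: discover an iSCSI target, then log in and get a raw disk
Object storage is not mounted at all; clients read and write whole objects over HTTP APIs such as S3 or Swift.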
Ceph:
1. Ceph is a distributed storage system that can provide block storage, file system storage, and object storage. Its file storage is not yet very mature and is not recommended in a production environment; the most common form of deployment is block storage.
2. Main components of Ceph
OSD: object storage device, the only component in Ceph that actually stores data. Typically one OSD process is bound to one physical disk.
MON: monitor, which tracks the health of the entire cluster and maintains a map of every Ceph component. The number of MON processes should be odd, e.g. 3, 5, 7...
MDS: metadata server. Provides metadata for Ceph file system storage; it is not required if file system storage is not used.
Metadata: data that describes data. For example, a book's publisher, page count, author, and publication date are all metadata.
RADOS: Reliable Autonomic Distributed Object Store. RADOS stores all data in Ceph as objects and is responsible for their consistency.
RBD: provides a block storage interface for clients.
RADOS GW: provides an object storage interface for clients.
CephFS: provides a file system storage interface for clients.
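Once a cluster like the one built below is running, these components can be inspected with standard ceph commands; this is only a hedged illustration of where each piece shows up:
[root@node1 ~]# ceph -s    # overall health, MON quorum, and OSD counts
[root@node1 ~]# ceph osd tree    # every OSD, the host it runs on, and whether it is up/in
[root@node1 ~]# ceph mds stat    # MDS status; only meaningful if CephFS is used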
Ceph environment setup
1. Create 5 virtual machines
node1.tedu.cn    192.168.4.1
node2.tedu.cn    192.168.4.2
node3.tedu.cn    192.168.4.3
node4.tedu.cn    192.168.4.4
client.tedu.cn   192.168.4.10
2. Start the virtual machines
[root@room8pc16 kvms_ansi]# for vm in rh7_node{1..5}
do
    virsh start $vm
done
3. Configure the Ceph YUM repositories on the physical host
[root@room8pc16 cluster]# mkdir /var/ftp/ceph/
[root@room8pc16 cluster]# tail -1 /etc/fstab
/ISO/rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph iso9660 defaults 0 0
[root@room8pc16 cluster]# mount -a
[root@room8pc16 ~]# vim server.repo
[rhel7.4]
name=rhel7.4
baseurl=ftp://192.168.4.254/rhel7.4
enabled=1
gpgcheck=0
[mon]
name=mon
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/MON
enabled=1
gpgcheck=0
[osd]
name=osd
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/OSD
enabled=1
gpgcheck=0
[tools]
name=tools
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/Tools
enabled=1
gpgcheck=0
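The copy step is not shown in this article, but presumably server.repo ends up in /etc/yum.repos.d/ on every node; a quick hedged check on any node is then:
[root@node1 ~]# yum repolist    # the rhel7.4, mon, osd and tools repositories should appear with non-zero package counts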
4. A Ceph cluster has many nodes, and managing them one by one is inefficient and error-prone, so we pick one host as a management node that administers all the others. Here node1 is kept as the management node.
5. To make administration from the management node easier, first set up passwordless SSH login
(1) Configure name resolution so that each host can be accessed by name
[root@node1 ~]# for i in {1..4}
do
    echo -e "192.168.4.$i\tnode$i.tedu.cn\tnode$i" >> /etc/hosts
done
[root@node1 ~]# echo -e "192.168.4.10\tclient.tedu.cn\tclient" >> /etc/hosts
(2) Generate a key pair non-interactively
[root@node1 ~]# ssh-keygen -f /root/.ssh/id_rsa -N ""
(3) The first SSH connection to a remote host asks for confirmation (yes/no). You can save the identity information of the remote hosts locally in advance
[root@node1 ~]# ssh-keyscan 192.168.4.{1..4} >> /root/.ssh/known_hosts
[root@node1 ~]# ssh-keyscan 192.168.4.10 >> /root/.ssh/known_hosts
[root@node1 ~]# ssh-keyscan node{1..4} >> /root/.ssh/known_hosts
[root@node1 ~]# ssh-keyscan client >> /root/.ssh/known_hosts
(4) Copy the public key to each remote host
[root@node1 ~]# for ip in 192.168.4.{1..4}
do
    ssh-copy-id $ip
done
[root@node1 ~]# ssh-copy-id 192.168.4.10
(5) Copy the hosts file to every host
[root@node1 ~]# for host in node{2..4}
do
    scp /etc/hosts $host:/etc/
done
[root@node1 ~]# scp /etc/hosts client:/etc
6. Use client as the NTP server
NTP: Network Time Protocol, UDP port 123; used to synchronize time.
Precise timekeeping comes from atomic clocks. Time is not the same around the globe: because the earth is round, it is divided by longitude into one time zone every 15 degrees, 24 time zones in total. China uses East 8 zone time (UTC+8).
(1) Install the software package on client
[root@client ~]# yum install -y chrony
(2) Modify the configuration
[root@client ~]# vim /etc/chrony.conf
allow 192.168.4.0/24
local stratum 10
(3) Start the service
[root@client ~]# systemctl restart chronyd; systemctl enable chronyd
(4) Configure the other hosts as NTP clients
[root@node1 ~]# vim /etc/chrony.conf
server 192.168.4.10 iburst    # delete the other three lines beginning with "server"
[root@node1 ~]# for ip in 192.168.4.{2..4}
do
    scp /etc/chrony.conf $ip:/etc/
done
[root@node1 ~]# for ip in 192.168.4.{1..4}
do
    ssh $ip systemctl restart chronyd
done
(5) Testing
[root@node1 ~]# date -s "2018-06-20 12:00:00"    # deliberately set a wrong time
[root@node1 ~]# ntpdate 192.168.4.10    # synchronize the clock with 192.168.4.10
[root@node1 ~]# date    # the time is back in sync
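Besides ntpdate, chrony offers its own way to verify synchronization against 192.168.4.10 (shown here as an optional extra check):
[root@node1 ~]# chronyc sources -v    # the line for 192.168.4.10 should be marked ^* (current sync source)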
7. Add 3 hard disks to each of node1~node3
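How the disks are added is not shown; on the KVM physical host a sketch like the following would work, assuming the VMs are named rh7_node1~rh7_node3 as in step 2 and that 20 GB qcow2 images under /var/lib/libvirt/images are acceptable:
[root@room8pc16 ~]# for vm in rh7_node{1..3}
do
    for disk in vdb vdc vdd
    do
        # create a 20 GB qcow2 image and attach it to the VM as vdb/vdc/vdd
        qemu-img create -f qcow2 /var/lib/libvirt/images/${vm}_${disk}.qcow2 20G
        virsh attach-disk $vm /var/lib/libvirt/images/${vm}_${disk}.qcow2 $disk --driver qemu --subdriver qcow2 --persistent
    done
done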
8. Install the ceph deployment tool on node1
[root@node1 ~]# yum install -y ceph-deploy
9. Create a working directory for Ceph on node1; the directory name is arbitrary
[root@node1 ~]# mkdir ceph_conf
[root@node1 ~]# cd ceph_conf
10. Generate the configuration files necessary for ceph installation
[root@node1 ceph_conf]# ceph-deploy new node1 node2 node3
[root@node1 ceph_conf]# ls
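The ls output is not reproduced in the article; with ceph-deploy it should contain roughly the following files (exact names may vary by version):
ceph.conf              # cluster configuration (fsid, initial monitors, networks)
ceph-deploy-ceph.log   # log of everything ceph-deploy has done
ceph.mon.keyring       # keyring used to bootstrap the monitors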
11. Install the Ceph cluster
[root@node1 ceph_conf]# ceph-deploy install node1 node2 node3
12. Initialize MON services for all nodes
[root@node1 ceph_conf]# ceph-deploy mon create-initial
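At this point the monitors should already form a quorum; a quick hedged check from node1 (it assumes the admin keyring is available on node1, e.g. after ceph-deploy admin node1):
[root@node1 ceph_conf]# ceph mon stat    # should list node1, node2 and node3 in the quorum
[root@node1 ceph_conf]# ceph -s          # cluster status; no OSDs exist yet at this stage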
Configure the Ceph Cluster
1. Partition vdb on node1~node3; the vdb partitions will be used for the OSD journal
[root@node1 ceph_conf]# for host in node{1..3}
do
    ssh $host parted /dev/vdb mklabel gpt
done
[root@node1 ceph_conf]# for host in node{1..3}; do ssh $host parted /dev/vdb mkpart primary 1024kB 50%; done
[root@node1 ceph_conf]# for host in node{1..3}; do ssh $host parted /dev/vdb mkpart primary 50% 100%; done
[root@node1 ceph_conf]# for host in node{1..3}; do ssh $host lsblk; done
[root@node1 ceph_conf]# for host in node{1..3}; do ssh $host chown ceph.ceph /dev/vdb?; done    # after a reboot, the owner and group revert to root:disk
Configure udev so that the disk's owner and group are still ceph after a reboot
[root@node3 ~]# vim /etc/udev/rules.d/90-mydisk.rules
ACTION=="add", KERNEL=="vdb[12]", OWNER="ceph", GROUP="ceph"
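The rule only fires on "add" events, so it takes effect after a reboot; to apply it immediately (and, presumably, the same rule file is also needed on node1 and node2), one hedged option is:
[root@node3 ~]# udevadm control --reload-rules    # re-read the rules files
[root@node3 ~]# udevadm trigger --action=add --sysname-match="vdb[12]"    # replay the add event for vdb1/vdb2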
2. Create the OSD disks (run on node1)
(1) Initialize (zap) the data disks
[root@node1 ceph_conf]# for host in node{1..3}
do
    ceph-deploy disk zap $host:vdc $host:vdd
done
(2) Create the OSDs, using the vdb partitions as the journal
[root@node1 ceph_conf]# for host in node{1..3}
do
    ceph-deploy osd create $host:vdc:/dev/vdb1 $host:vdd:/dev/vdb2
done
If an error prompts you to run 'gatherkeys', execute the following command:
[root@node1 ceph_conf]# ceph-deploy gatherkeys node1 node2 node3
(3) Check the status
[root@node1 ceph_conf]# ceph -s    # displays HEALTH_OK if everything is normal
If the status is HEALTH_ERR, restart the services as follows:
[root@node1 ceph_conf]# for host in node{1..3}; do ssh $host systemctl restart ceph*.service ceph*.target; done
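Once the cluster reports HEALTH_OK, two more standard commands give a quick sanity check of the finished cluster:
[root@node1 ceph_conf]# ceph osd tree    # should show 6 OSDs (vdc and vdd on each of node1~node3), all up and in
[root@node1 ceph_conf]# ceph df          # total and available capacity of the new cluster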