System solution
I. Environment requirements
1. Network cards

Host          em1 (management)   em2 (storage)   em3    em4
controller1   172.16.16.1        172.16.17.1     None   None
controller2   172.16.16.2        172.16.17.2     None   None
compute1      172.16.16.3        172.16.17.3     None   None
compute2      172.16.16.4        172.16.17.4     None   None
compute3      172.16.16.5        172.16.17.5     None   None
……
2. Message queue
Use RabbitMQ in mirrored-queue mode. For deployment details, see the RabbitMQ cluster deployment document on ZenTao.
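For orientation, mirrored queues are switched on with an HA policy once the cluster is formed; a minimal sketch (the policy name ha-all is an assumed choice, not from the source):
# mirror every queue across all nodes in the cluster
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
# confirm the policy took effect
rabbitmqctl list_policies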
3. Database
Use MariaDB + InnoDB + Galera, version 10.0.18 or later. For deployment details, see the cluster deployment document on ZenTao.
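For orientation, a minimal Galera section for /etc/my.cnf.d/server.cnf, assuming the two controllers carry the cluster (the addresses and wsrep settings below are assumptions, not from the source):
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://172.16.16.1,172.16.16.2"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2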
4. Middleware
Use memcached (not clustered). Edit /etc/sysconfig/memcached and change 127.0.0.1 to the local host name (or IP).
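After that edit, /etc/sysconfig/memcached on controller1 looks roughly like this (stock CentOS defaults assumed):
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l controller1"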
II. Deployment plan
This deployment uses controller1 as the authentication host name.
All service passwords use $MODULE+manager, for example: novamanager, glancemanager.
The database passwords use dftc+$MODULE (shown as the DB_PASS placeholder in the commands below).
Plan the IP segments: 172.16.16.0/24 as the management segment, 172.16.17.0/24 as the storage segment, and 172.16.18.0/23 as the external segment.
Before operating, assign the variable: MYIP=`ip add show em1 | grep inet | head -1 | awk '{print $2}' | awk -F'/' '{print $1}'`
This article uses the flat+vxlan network deployment method; adapt it yourself if you need a different one.
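A quick sanity check of the variable (assumes em1 carries this host's management address):
echo $MYIP
# expect an address in the 172.16.16.0/24 management segment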
1. Database
Mysql-uroot-p*-e "create database keystone;"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'IDENTIFIED BY' DB_PASS';"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'IDENTIFIED BY' DB_PASS';"
Mysql-uroot-p*-e "create database glance;"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost'IDENTIFIED BY' DB_PASS';"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIEDBY' DB_PASS';"
Mysql-uroot-p*-e "create database nova;"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost'IDENTIFIED BY' DB_PASS';"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY'DB_PASS';"
Mysql-uroot-p*-e "create database nova_api;"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost'IDENTIFIED BY' DB_PASS';"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIEDBY' DB_PASS';"
Mysql-uroot-p*-e "create database neutron;"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'IDENTIFIED BY' DB_PASS';"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIEDBY' DB_PASS';"
Mysql-uroot-p*-e "create database cinder;"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'IDENTIFIED BY' DB_PASS';"
Mysql-uroot-p*-e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIEDBY' DB_PASS';"
Mysql-uroot-p*-e "FLUSH PRIVILEGES;"
2. Keystone
# install dependency packages
yum install openstack-keystone httpd mod_wsgi
# modify the configuration file
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 749d6ead6be998642461
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:DB_PASS@controller1/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
# synchronize the database and generate the fernet keys
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# edit /etc/httpd/conf/httpd.conf
touch /etc/httpd/conf.d/wsgi-keystone.conf
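A minimal sketch of the httpd.conf edit referenced above, assuming controller1 is this host's name; the keystone WSGI virtual host that belongs in the file just created is truncated in the source:
# set ServerName in /etc/httpd/conf/httpd.conf (value assumed)
sed -i 's/^#ServerName.*/ServerName controller1/' /etc/httpd/conf/httpd.conf
systemctl enable httpd && systemctl start httpd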
……
# append the network settings to ceph.conf
echo "public network = 172.16.16.0/24" >> ceph.conf
echo "cluster network = 172.16.17.0/24" >> ceph.conf
## install ceph on the nodes
# ceph.x86_64 1:10.2.5-0.el7          ceph-base.x86_64 1:10.2.5-0.el7
# ceph-common.x86_64 1:10.2.5-0.el7   ceph-mds.x86_64 1:10.2.5-0.el7
# ceph-mon.x86_64 1:10.2.5-0.el7      ceph-osd.x86_64 1:10.2.5-0.el7
# ceph-radosgw.x86_64 1:10.2.5-0.el7  ceph-selinux.x86_64 1:10.2.5-0.el7
ceph-deploy install controller1 compute1 compute2 compute3
## initialize ceph-mon
ceph-deploy mon create-initial
# error message
[compute3] [DEBUG] detect platform information from remote host
[compute3] [DEBUG] detect machine type
[compute3] [DEBUG] find the location of an executable
[compute3] [INFO] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon] [WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon] [WARNIN] waiting 5 seconds before retrying
[compute3] [INFO] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon] [WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon] [WARNIN] waiting 10 seconds before retrying
[compute3] [INFO] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon] [WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon] [WARNIN] waiting 10 seconds before retrying
[compute3] [INFO] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon] [WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon] [WARNIN] waiting 15 seconds before retrying
[compute3] [INFO] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon] [WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon] [WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon] [ERROR] Some monitors have still not reached quorum:
[ceph_deploy.mon] [ERROR] compute1
[ceph_deploy.mon] [ERROR] compute3
[ceph_deploy.mon] [ERROR] compute2
# resolution
Copy the remote configuration file to the local host and compare the two files: the contents are identical, so it is safe to proceed to the next step.
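When the monitors do differ, the admin socket from the log above is the quickest way to inspect each one by hand (compute3 shown; standard checks, not from the source):
sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
# also worth ruling out: clock skew between the nodes and a blocked 6789/tcp monitor port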
## initialize the OSDs
ceph-deploy osd prepare compute1:/osd/osd0/ compute2:/osd/osd1 compute3:/osd/osd2
ceph-deploy osd activate compute1:/osd/osd0/ compute2:/osd/osd1 compute3:/osd/osd2
ceph-deploy admin controller1 compute1 compute2 compute3
chmod +r /etc/ceph/ceph.client.admin.keyring
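With the OSDs activated and the admin keyring readable, a quick health check (standard ceph CLI, not from the source):
ceph -s
ceph osd tree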
# create cephx users for cinder, glance and cinder-backup
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
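To confirm the capabilities were granted as intended (standard command, not from the source):
ceph auth list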
# distribute the keyrings and set ownership
ceph auth get-or-create client.glance | ssh controller1 sudo tee /etc/ceph/ceph.client.glance.keyring
ssh controller1 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
# run the commands below on the controller node #
ceph auth get-key client.cinder | ssh compute1 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute2 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute3 tee client.cinder.key
# run the following as the dftc user on the compute nodes #
cat > secret.xml
……
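The secret.xml step is cut off in the source; for reference, the standard Ceph-with-libvirt procedure it begins looks like this sketch (run on each compute node; the UUID is generated, not from the source):
UUID=$(uuidgen)
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret $UUID --base64 $(cat client.cinder.key)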
1. Modify /etc/my.cnf.d/server.cnf
……
2. Restart the database
service mariadb restart
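A quick check that the node rejoined the Galera cluster after the restart (standard status query, not from the source):
mysql -uroot -p* -e "SHOW STATUS LIKE 'wsrep_cluster_size';"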
……