This article explores high-availability (HA) solutions for each component of an OpenStack M (Mitaka) deployment, with detailed analysis and corresponding solutions, in the hope of helping readers facing the same problem find a simpler approach.
1 Introduction
This test targets each OpenStack component, exploring deployment architectures that achieve high availability, so that cloud platform services and virtual machine instances remain available after a physical node goes down.
The main OpenStack components, and the parts of each that need to be considered for HA, are as follows:
1: keystone authentication module
1) keystone-api
2: glance image module
1) glance-api
2) glance-registry
3) glance backend storage
3: nova compute module
1) nova-api
2) nova-novncproxy
3) instance
4: cinder block storage module
1) cinder-api
2) cinder-volume
5: neutron network module
1) neutron-server
2) l3 router
6: swift object storage module
1) proxy-server
7: horizon dashboard front-end
8: mariadb back-end database
9: rabbitmq message queue middleware
10: memcached cache system
The physical host information for the deployment is as follows:
| node name | login/operation IP address | internal communication IP address | OS version | OpenStack version |
| --- | --- | --- | --- | --- |
| controller | 10.45.5.155 | 192.168.159.128 | CentOS 7.2 | mitaka |
| compute | 10.45.6.196 | 192.168.159.129 | CentOS 7.2 | mitaka |
| compute1 | 10.45.6.191 | 192.168.159.130 | CentOS 7.2 | mitaka |
All three hosts have every OpenStack service component deployed, which makes the high-availability deployment easier.
2 Openstack component HA implementation
2.1 Keystone component high availability
1) keystone-api (httpd)
High availability implementation:
pacemaker+corosync: pacemaker manages a floating IP address; all 3 nodes start the keystone service listening on 0.0.0.0, the floating address switches between nodes, and only one node provides service at a time.
haproxy: pacemaker generates the floating address; each of the 3 nodes starts the keystone service listening on its own internal communication IP, and haproxy listens on the floating IP, providing a single external entry point and distributing requests to the 3 physical nodes.
Legacy problem: active-active (A-A) load-balancing mode cannot be achieved with haproxy, because token information becomes inconsistent across nodes. Therefore only one active node can be configured in haproxy; the other nodes are backups. A sketch of both mechanisms follows.
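A minimal sketch of the two mechanisms, assuming an illustrative floating IP of 192.168.159.100 and the internal IPs from the table above; resource and section names are made up for illustration:

```
# pacemaker: create the floating IP resource (run once on any cluster node)
pcs resource create vip_openstack ocf:heartbeat:IPaddr2 \
    ip=192.168.159.100 cidr_netmask=24 op monitor interval=30s
```

```
# haproxy.cfg: keystone public API behind the floating IP.
# Only one server is active; the others are marked backup because
# A-A load balancing confuses token state.
listen keystone_public
    bind 192.168.159.100:5000
    option tcpka
    server controller 192.168.159.128:5000 check inter 2000 rise 2 fall 5
    server compute    192.168.159.129:5000 check inter 2000 backup
    server compute1   192.168.159.130:5000 check inter 2000 backup
```

For the services below that do achieve A-A redundancy (glance, nova, cinder, neutron, horizon, swift proxy), the same kind of listen block is used without the backup keyword, typically with a balance directive such as roundrobin.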
2.2 glance component high availability
1) glance-api, glance-registry
High availability implementation:
pacemaker+corosync: pacemaker manages a floating IP address; all 3 nodes start the api and registry services listening on 0.0.0.0, the floating address switches between nodes, and only one node provides service at a time.
haproxy: pacemaker generates the floating address; each of the 3 nodes starts the api and registry services listening on its own internal communication IP, and haproxy listens on the floating IP, providing a single external entry point and distributing requests to the 3 physical nodes to achieve A-A mode redundancy.
2) glance backend storage
High availability implementation:
Swift: glance's backend connects to the floating IP of the Swift object store, relying on Swift's own high availability to achieve HA for glance backend storage; a configuration sketch follows.
Remaining problems: None
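A minimal sketch of the Swift backend settings in glance-api.conf, assuming the Swift proxy is reachable through an illustrative floating IP of 192.168.159.101 and a services:glance account; exact option names vary between releases:

```
[glance_store]
# Store images in Swift instead of the local filesystem
stores = glance.store.swift.Store
default_store = swift
swift_store_auth_address = http://192.168.159.101:5000/v2.0/
swift_store_user = services:glance
swift_store_key = GLANCE_SWIFT_PASSWORD
swift_store_container = glance
swift_store_create_container_on_put = True
```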
2.3 nova component high availability
1) nova-api, nova-novncproxy
High availability implementation:
pacemaker+corosync: pacemaker manages a floating IP address; all 3 nodes start the api and vncproxy services listening on 0.0.0.0, the floating address switches between nodes, and only one node provides service at a time.
haproxy: pacemaker generates the floating address; each of the 3 nodes starts the api and vncproxy services listening on its own internal communication IP, and haproxy listens on the floating IP, providing a single external entry point and distributing requests to the 3 physical nodes to achieve A-A mode redundancy.
2) instance
High availability implementation:
instance live migrate: the live migration feature moves instances between compute nodes online (similar to vMotion in vSphere).
instance evacuate: nova's evacuate feature rebuilds an instance on another node when its compute node goes down.
Legacy issue: there is currently no reliable way to trigger instance evacuation automatically on host failure (i.e., functionality like vSphere HA). Both operations are sketched below.
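A minimal sketch of both operations with the Mitaka-era nova CLI; `<instance-uuid>` is a placeholder and the target host name comes from the table above:

```
# Move a running instance to another compute node online;
# --block-migrate is needed when instances are not on shared storage.
nova live-migration --block-migrate <instance-uuid> compute1

# After a compute node has gone down, rebuild its instance on another node;
# --on-shared-storage preserves the disk if /var/lib/nova/instances is shared.
nova evacuate --on-shared-storage <instance-uuid> compute1
```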
2.4 Cinder component high availability
1) cinder-api
High availability implementation:
pacemaker+corosync: pacemaker manages a floating IP address; all 3 nodes start the api service listening on 0.0.0.0, the floating address switches between nodes, and only one node provides service at a time.
haproxy: pacemaker generates the floating address; each of the 3 nodes starts the api service listening on its own internal communication IP, and haproxy listens on the floating IP, providing a single external entry point and distributing requests to the 3 physical nodes to achieve A-A mode redundancy.
2) cinder-volume
High availability implementation:
cinder migrate: deploy the cinder-volume service on multiple nodes, all connected to the same disk array on the backend. When one cinder-volume node has a problem, such as host downtime or a storage link failure, cinder can change the volume host attribute of the failed node's volumes to a healthy host, after which the storage can be accessed again (see the sketch after this list).
Remaining issues:
1. There is currently no reliable solution for detecting failures of the cinder-volume service (for example, by monitoring storage links) and switching over automatically.
2. Online copy migration of volumes across backends cannot be configured yet (functionality similar to Storage vMotion in vSphere).
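A minimal sketch of re-homing volumes after a node failure with cinder-manage, assuming an illustrative host@backend naming of controller@lvm and compute@lvm:

```
# Point volumes owned by the failed cinder-volume host at a healthy host
# that sees the same backend storage, so they can be managed again.
cinder-manage volume update_host \
    --currenthost controller@lvm --newhost compute@lvm
```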
2.5 neutron component high availability
1) neutron-server
High availability implementation:
pacemaker+corosync: pacemaker manages a floating IP address; all 3 nodes start the neutron-server service listening on 0.0.0.0, the floating address switches between nodes, and only one node provides service at a time.
haproxy: pacemaker generates the floating address; each of the 3 nodes starts the neutron-server service listening on its own internal communication IP, and haproxy listens on the floating IP, providing a single external entry point and distributing requests to the 3 physical nodes to achieve A-A mode redundancy.
2) l3 router
High availability implementation:
keepalived+vrrp: neutron's HA router mode (keepalived over VRRP) is still to be tested; a configuration sketch follows this list.
Remaining issues:
1. If we want to replicate our current VMware networking model in OpenStack, it may not fit directly, so this needs to be discussed together.
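A minimal sketch of the neutron.conf options that enable the keepalived/VRRP router mode in Mitaka (agent counts are illustrative for a 3-node deployment):

```
[DEFAULT]
# Create new routers as HA routers backed by keepalived/VRRP
l3_ha = True
# Spread each router across several L3 agents
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2
```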
2.6 Swift component high availability
1) proxy-server
High availability implementation:
pacemaker+corosync: pacemaker manages a floating IP address; all 3 nodes start the proxy-server service listening on 0.0.0.0, the floating address switches between nodes, and only one node provides service at a time.
haproxy: pacemaker generates the floating address; each of the 3 nodes starts the proxy-server service listening on its own internal communication IP, and haproxy listens on the floating IP, providing a single external entry point and distributing requests to the 3 physical nodes to achieve A-A mode redundancy.
Remaining problems: None
2.7 horizon component high availability
1) dashboard
High availability implementation:
pacemaker+corosync: pacemaker manages a floating IP address; all 3 nodes start the dashboard web service listening on 0.0.0.0, the floating address switches between nodes, and only one node provides service at a time.
haproxy: pacemaker generates the floating address; each of the 3 nodes starts the dashboard web service listening on its own internal communication IP, and haproxy listens on the floating IP, providing a single external entry point and distributing requests to the 3 physical nodes to achieve A-A mode redundancy.
Remaining problems: None
2.8 MariaDB High Availability
galera cluster: MariaDB is installed on all three nodes and a multi-node multi-master cluster is created with Galera; a floating address generated by pacemaker switches between nodes, so only one database node provides service at a time (configuration sketch below).
Legacy problem: the official ha-guide gives examples of putting haproxy in front of the galera cluster, but in the actual configuration haproxy cannot be used for front-end distribution for now: the database cannot be reached through the port haproxy listens on, and the cause has not been found yet.
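A minimal sketch of the Galera settings, e.g. in /etc/my.cnf.d/galera.cnf on each node, using the internal IPs from the table above (library path and cluster name are illustrative; not a complete tuning guide):

```
[mysqld]
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_on = ON
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name = "openstack_cluster"
wsrep_cluster_address = "gcomm://192.168.159.128,192.168.159.129,192.168.159.130"
wsrep_sst_method = rsync
```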
2.9 RabbitMQ High Availability
rabbitmq built-in cluster: rabbitmq provides a native clustering mechanism that joins multiple nodes into one cluster and synchronizes message queue data over the network. In addition, the other OpenStack components provide redundant message queue configuration options: when configuring the message queue address, list the addresses and ports of all three nodes at the same time, as sketched below.
Remaining problems: None
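A minimal sketch of both sides, assuming the host names from the table above; the policy name is illustrative:

```
# On compute/compute1: join the cluster formed on controller,
# then mirror queues across all nodes.
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@controller
rabbitmqctl start_app
rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'
```

```
# In each OpenStack service's configuration (Mitaka-era option names):
[oslo_messaging_rabbit]
rabbit_hosts = 192.168.159.128:5672,192.168.159.129:5672,192.168.159.130:5672
rabbit_ha_queues = true
```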
2.10 Memcached High Availability
natively supported by openstack: OpenStack natively supports an A-A multi-node memcached configuration. Similar to rabbitmq, simply list the addresses of all memcached nodes in the relevant configuration option, as sketched below.
Remaining problems: None
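A minimal sketch of the caching section, e.g. in keystone.conf or nova.conf (section and option names vary slightly between services and releases):

```
[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = 192.168.159.128:11211,192.168.159.129:11211,192.168.159.130:11211
```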
3 Summary
From the test conclusions above, the HA mechanism implementation matrix for each component is as follows:
| system module | service module | pacemaker+corosync | haproxy | other mechanisms | remarks |
| --- | --- | --- | --- | --- | --- |
| keystone authentication module | keystone-api | √ | √ | | haproxy does not support load-balancing (A-A) mode for now |
| glance image module | glance-api | √ | √ | | |
| | glance-registry | √ | √ | | |
| | glance backend storage | × | × | swift | |
| nova compute module | nova-api | √ | √ | | |
| | nova-novncproxy | √ | √ | | |
| | instance | × | × | nova migrate, nova evacuate | automatic evacuation on failure not achievable for now |
| cinder block storage module | cinder-api | √ | √ | | |
| | cinder-volume | × | × | cinder migrate | automatic migration on failure not achievable for now |
| neutron network module | neutron-server | √ | √ | | |
| | l3 router | × | × | keepalived+vrrp | router redundancy scheme to be tested; the OpenStack networking solution needs discussion |
| swift object storage module | proxy-server | √ | √ | | |
| horizon front-end management interface | dashboard | √ | √ | | |
| mariadb back-end sql database | mariadb | √ | × | galera cluster | with the haproxy configuration from the official ha-guide, clients cannot connect to the database |
| rabbitmq message queue | rabbitmq | × | × | built-in cluster mechanism | |
| memcached cache system | memcached | × | × | native multi-server configuration | OpenStack natively supports multiple memcached servers |
That concludes this exploration of high-availability solutions for the components of OpenStack M. Hopefully the content above is of some help; if you still have unresolved questions, you can follow the industry information channel to learn more.