2025-01-16 Update From: SLTechnology News&Howtos
Analysis and practice of OpenStack Architecture
OpenStack evolves rapidly, releasing two versions a year, so its architecture is constantly moving forward. Looking back at the Essex (E) release, OpenStack had only five components: Nova, Glance, Swift, Horizon, and Keystone. By the Folsom (F) release, the core components had grown to seven, adding Neutron and Cinder. These two took over the network and volume functions previously handled inside Nova, but as their subsequent development shows, they have gone far beyond that original scope.
I. Business architecture design ideas
One thing OpenStack does well is its relatively uniform architecture design. Across different modules, the business architecture generally follows these design ideas:
REST API receives external requests.
Scheduler is responsible for scheduling.
Worker is responsible for task execution.
Driver is responsible for task implementation.
The message queue is responsible for internal communication between components.
The database persists component state.
Next, we will introduce each of these design ideas in turn.
REST API receives external requests. The logic of OpenStack is realized through message passing between components, and communication between different components goes through each component's REST API. The REST API can be understood as the entrance to every service: it receives REST requests from clients and forwards them internally.
Another advantage of the REST API is that it hides internal implementation details and provides a unified access interface. Thanks to this modular design, the REST API integrates easily with third-party systems, and in large-scale environments it can be deployed in a distributed fashion, greatly improving API availability.
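The "single entrance that hides internals" idea can be sketched in a few lines. This is not OpenStack code; the route table and handler names here are invented for illustration:

```python
# Minimal sketch (hypothetical, not OpenStack's actual API): one REST-style
# entry point that hides internal services behind a dispatch table.
ROUTES = {}

def route(method, path):
    """Register a handler for an HTTP method and path."""
    def decorator(fn):
        ROUTES[(method, path)] = fn
        return fn
    return decorator

@route("POST", "/servers")
def create_server(body):
    # A real service would publish a message to the queue here;
    # we just return a fake "accepted" response.
    return {"status": 202, "task": "schedule", "name": body["name"]}

def handle(method, path, body=None):
    """Single entry point: look up the handler, or return 404."""
    fn = ROUTES.get((method, path))
    if fn is None:
        return {"status": 404}
    return fn(body or {})
```

Callers only ever see `handle()`; the internal services behind it can be reorganized freely without changing the interface.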
Scheduler is responsible for scheduling. The Scheduler mentioned here is a general term; it does not mean that every component has an xxx-scheduler service, but most components have a service providing similar functionality, which may exist on its own or be bundled together with other services.
Let's take Nova as an example. When we create a virtual machine, we need to select a suitable compute node and then create the virtual machine on that node; this node filtering relies on the scheduling function of nova-scheduler.
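Schematically, nova-scheduler filters out hosts that cannot satisfy the request and then weighs the survivors. The sketch below is a simplified stand-in (the host dicts and single RAM criterion are assumptions, not Nova's real data model):

```python
# Hypothetical filter-and-weigh scheduler, in the style of nova-scheduler:
# keep hosts with enough free RAM, then prefer the one with the most free RAM.
def schedule(hosts, ram_required_mb):
    """hosts: list of dicts like {"name": ..., "free_ram_mb": ...}."""
    candidates = [h for h in hosts if h["free_ram_mb"] >= ram_required_mb]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Weighing step: pick the candidate with the most free RAM.
    return max(candidates, key=lambda h: h["free_ram_mb"])

hosts = [
    {"name": "node1", "free_ram_mb": 2048},
    {"name": "node2", "free_ram_mb": 8192},
    {"name": "node3", "free_ram_mb": 512},
]
```

With these hosts, a request for 4096 MB would be placed on node2, and a request no host can satisfy raises an error, mirroring Nova's "No valid host was found" failure.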
Worker is responsible for doing the work. The Scheduler described above only assigns tasks; it is a bit like a project manager who coordinates everyone's work and hands each task to the right person, while the Worker is the service that actually performs the task. In Nova, the Worker is the nova-compute service; in Heat, it is heat-engine; and in many components the xxx-engine service can be regarded as the Worker.
Separating the Scheduler, which schedules, from the Worker, which executes, makes OpenStack easier to scale and lets us consider from different angles how to improve the system's concurrency and respond to large-scale request scenarios.
Driver is responsible for task implementation. To embrace different technologies, OpenStack makes extensive use of Drivers. For example, in Nova the nova-compute service supports a variety of Hypervisors, selectable through the configuration file as needed; after modifying the configuration, you only need to restart the service. Glance likewise supports multiple storage backends, such as the local filesystem, Ceph, Cinder, and Swift.
To put it plainly, the reason so many different Drivers can be supported is that each component has its own Driver framework; users only need to configure the Driver that meets their needs. The Driver framework also lowers the bar for upper-layer developers: they do not need to care how the underlying Driver is implemented, and the Driver implementation can be left to specialists.
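The Driver-framework idea can be sketched as a common interface plus a registry keyed by a config option. All class and option names here are invented for illustration; this is not Glance's actual store API:

```python
# Illustrative driver framework: the upper layer codes against StoreDriver,
# and the concrete backend is chosen from configuration at startup.
class StoreDriver:
    def put(self, name, data):
        raise NotImplementedError
    def get(self, name):
        raise NotImplementedError

class FileDriver(StoreDriver):
    """Toy 'local filesystem' backend; a dict stands in for the disk."""
    def __init__(self):
        self._data = {}
    def put(self, name, data):
        self._data[name] = data
    def get(self, name):
        return self._data[name]

# "ceph", "swift", ... implementations would register themselves here.
DRIVERS = {"file": FileDriver}

def load_driver(conf):
    """Pick the backend named in the config; switching backends only
    requires editing the config and restarting the service."""
    return DRIVERS[conf["store_backend"]]()
```

Upper-layer code only ever calls `put()`/`get()`; which backend actually runs is purely a deployment decision.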
The message queue is responsible for communication within a component. From the earlier chapters this should be familiar: the message queue greatly decouples the different services of the same component, allowing them to be deployed independently in a distributed way. In production, two invocation styles are used over the message queue: synchronous invocation and asynchronous invocation.
In a synchronous invocation, from the perspective of the calling relationship, the REST API directly invokes the component's internal services. In Nova, a synchronous invocation means that the nova-api service calls nova-scheduler and waits for the result to be returned; the REST API service keeps waiting until the backend responds.
Asynchronous invocation is the opposite: the sender returns immediately after issuing the request, without waiting for the receiver's response.
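The two styles can be modeled with a plain in-process queue. This is a toy sketch of the pattern (OpenStack actually uses oslo.messaging over AMQP); the `call`/`cast` naming follows the convention the text describes:

```python
import queue
import threading

# Toy model of invocation over a message queue:
# call() blocks until the worker replies; cast() returns at once.
requests = queue.Queue()

def worker():
    while True:
        method, arg, reply_q = requests.get()
        result = f"{method}({arg}) done"     # stand-in for real work
        if reply_q is not None:              # a synchronous caller is waiting
            reply_q.put(result)
        requests.task_done()

threading.Thread(target=worker, daemon=True).start()

def call(method, arg):
    """Synchronous: send the request and block for the response."""
    reply_q = queue.Queue()
    requests.put((method, arg, reply_q))
    return reply_q.get(timeout=5)

def cast(method, arg):
    """Asynchronous: send the request and return immediately."""
    requests.put((method, arg, None))
```

Here `call("boot", "vm1")` blocks the caller until the worker replies, while `cast("boot", "vm2")` returns as soon as the message is queued.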
Database service. Each component in OpenStack needs to maintain its own state information, so each component has a corresponding database service on the backend.
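As a schematic illustration of "each component persists its own state", the snippet below tracks instance status in an SQLite table, loosely in the spirit of Nova's instances table (the schema here is invented, not Nova's real one):

```python
import sqlite3

# Toy state store: an in-memory table mapping instance UUIDs to status.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, status TEXT)")

def set_status(uuid, status):
    """Insert the instance, or update its status if it already exists."""
    db.execute(
        "INSERT INTO instances (uuid, status) VALUES (?, ?) "
        "ON CONFLICT(uuid) DO UPDATE SET status = excluded.status",
        (uuid, status),
    )

def get_status(uuid):
    row = db.execute(
        "SELECT status FROM instances WHERE uuid = ?", (uuid,)
    ).fetchone()
    return row[0] if row else None
```

A service can crash and restart, and as long as the database survives, the component's view of the world (here, which instances are BUILD or ACTIVE) survives with it.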
II. Deployment architecture design ideas
The modular business architecture greatly decouples the components, and in turn the services within each component. As a result, not only can different components be deployed in a distributed fashion, but so can different services of the same component. In recent years, with the rise of container technology, OpenStack's component separation and service modularization have made containerized deployment easier to achieve.
Tip: the community already has a fairly mature containerized deployment solution for OpenStack; the project is called Kolla.
Putting aside containerization, let's take a look at what kind of deployment design ideas OpenStack has.
The previous section analyzed OpenStack's business architecture from the perspective of logical relationships and communication between components, which belongs to the upper-layer software logical architecture. As a distributed system, OpenStack must also solve the problem of mapping this upper-layer logic onto the underlying physical architecture: how to reasonably install different components onto actual physical servers, how to deploy the different services of a single component, and so on.
The deployment of OpenStack can be roughly divided into two types:
All-in-One deployment. This is suitable for development and learning environments. Because OpenStack develops rapidly, if we are interested in a particular component, we can quickly build an OpenStack environment containing it this way. The two main tools for quick setup are DevStack and RDO; Fuel can also be used, though it is more cumbersome.
Distributed deployment. Also called cluster deployment: different components, and different services of the same component, are deployed on the same physical server or spread across different servers as the implementation requires.
Although two deployment methods are listed here, OpenStack deployment is not fixed; it must be tailored to actual production requirements. In a real production deployment, computing, network, storage, and other resources need to be planned in advance, and two deployment architectures arise from different planning schemes: a simple deployment architecture and a complex deployment architecture.
Simple deployment architecture
This is a simple deployment scheme for a simple production environment: node roles are relatively simple and the network setup is not particularly complex. Its architecture is designed as follows:
1. Node role
Only control nodes, compute nodes, and storage nodes are included. Most OpenStack services, such as the identity service and the image service, are deployed on the control node. The control node, also called the management node, mainly schedules the related services on itself and on other nodes. Compute nodes are the physical servers on which virtual machines run. Storage nodes provide storage services; in particular, distributed storage such as Ceph is generally deployed separately on storage nodes.
Network nodes are not mentioned above, yet they are very important in OpenStack: if something goes wrong with the network, many services stop working properly. The reason the "network node" role is not listed separately here is that network nodes can often be co-located with control nodes. Similarly, whether storage nodes must be deployed separately depends on the storage architecture in use; in general, storage can also be co-located with compute nodes, for example when Sheepdog is used as the backend storage.
2. Network deployment
Although network services can be deployed separately or together with other nodes, the network deserves its own discussion, because the quality of network planning greatly affects the stability, security, and maintainability of the entire cloud platform.
Here we call the node where network services are deployed the network node (although it may overlap with the node roles mentioned above). The networks involved generally fall into four categories: the management network, the storage network, the data network, and the external network.
Management network. It carries communication between the management node and the other nodes; the management node manages services on other nodes through this network.
Storage network. The network over which compute nodes access storage nodes; other nodes also use this network when reading from and writing to storage devices.
Data network. Also known as the internal network, it is used for communication between internal OpenStack components. A single physical server in the cloud platform may host a large number of virtual machines at the same time, and these virtual machines need to communicate with one another; such internal traffic goes over the data network.
External network. Also called the public network, it is the network through which the platform provides services to the outside world.
That is the simple deployment architecture. Although it is simple, it still follows the idea of distributed deployment: components and services are deployed in a distributed fashion, and the network is divided by function. This simple deployment can satisfy users without high demands on the cloud platform; it does not consider high availability, nor advanced features such as multi-region support.
Complex deployment architecture
This kind of deployment is designed and implemented on top of the former. Relative to the simple deployment architecture, the first problem to consider when designing a complex architecture is high availability. There are many ways to achieve it: the same component can be deployed on different nodes, or third-party tools can be used. Pacemaker + Corosync is a common high-availability combination, in which the former manages resources while the latter provides the communication service the former relies on. Since the emergence of container technology, the community has also taken a positive attitude; OpenStack high availability can likewise be built on top of Kubernetes's high-availability architecture, a scheme that requires deploying the OpenStack components in containers.
Large-scale, high-concurrency scenarios also need to be considered. For large-scale deployments, management services should be spread across different physical nodes, and the different services of one component can be split out for distributed deployment, making it easy to scale individual OpenStack services horizontally. For example, when a large number of virtual machine creation requests arrive, we can scale out the related Nova services (nova-api, nova-scheduler, nova-conductor, and nova-compute) to improve our ability to handle high concurrency.
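The horizontal-scaling idea above can be illustrated with a toy model: several identical workers pull "create VM" requests from one shared queue, so adding capacity means simply starting more workers, with no change on the API side. This is a sketch of the pattern, not OpenStack code:

```python
import queue
import threading

# Toy horizontal scaling: N identical workers consume one shared task queue.
tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def worker(worker_id):
    while True:
        name = tasks.get()
        with results_lock:
            results.append((worker_id, name))   # stand-in for building the VM
        tasks.task_done()

def start_workers(n):
    """Scaling out is just starting more identical consumers."""
    for i in range(n):
        threading.Thread(target=worker, args=(i,), daemon=True).start()

start_workers(4)
for i in range(100):
    tasks.put(f"vm-{i}")
tasks.join()                       # wait until every request is handled
```

Because the workers are stateless consumers of the same queue, going from 4 to 40 of them (possibly on different physical nodes) requires no coordination logic at all.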
III. User role design of the platform
OpenStack offers many functions and can provide us with IaaS services, but a complete IaaS offering needs to provide the following capabilities:
The platform supports billing.
The platform owner can register services on the platform.
Developers and operators can create and store their own custom images.
Operations administrators can configure and modify the platform's infrastructure.
Based on the above capabilities, the most basic platform user roles of OpenStack can be represented as shown in the figure.
Figure: the corresponding relationship between user roles and functions
There are four types of user roles on the platform: developers, operators, owners, and administrators, and each role is assigned the functions it needs.
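A role design like this ultimately comes down to a mapping from roles to permitted actions. The sketch below is a hypothetical mapping matching the four roles and capabilities described above; it is not Keystone's actual policy format:

```python
# Hypothetical role-to-capability mapping for the four platform roles.
POLICY = {
    "owner":     {"register_service", "view_billing"},
    "developer": {"create_image", "store_image"},
    "operator":  {"create_image", "store_image", "view_billing"},
    "admin":     {"configure_infrastructure", "register_service"},
}

def allowed(role, action):
    """Return True if the given role is granted the given action."""
    return action in POLICY.get(role, set())
```

In a real deployment this check would be enforced centrally (in OpenStack's case, via Keystone and per-service policy files), so that resources are isolated between roles.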
A complete cloud platform cannot be designed without the three kinds of design discussed above. A friendly role design, a sound business architecture, and a flexible deployment method are issues that must be carefully considered at the start of cloud platform design; lacking any one of the three greatly reduces the platform's usability and operability.
Tip: different cloud platforms can customize roles according to their own situations, isolating resources between different people and thereby improving the security of the cloud platform.