Shulou (Shulou.com) · SLTechnology News & Howtos · Updated 2025-04-08
This article explains in detail the highlights of the Ussuri release of OpenStack. The editor finds it very practical and shares it here for your reference; I hope you gain something from reading it.
Preface
OpenStack is not tied to any single vendor, and it is flexible and free; it can currently be considered one of the preferred solutions for building clouds. 83% of private cloud users turn to OpenStack because it frees them from excessive dependence on a single public cloud. In fact, OpenStack users often also rely on public clouds such as Amazon Web Services (AWS, 44%), Microsoft Azure (28%) or Google Cloud Platform (GCP, 24%), while 58% of their infrastructure is driven by OpenStack.
In addition, one reason some users may not see OpenStack's impact is that at least half of all OpenStack deployments are in China, where Huawei (28%) and EasyStack (22%) lead. Outside that market, Red Hat (20%), Canonical (16%) and Mirantis (5%) dominate, with SUSE and the rest at about 3%.
To maintain this momentum, Ussuri, the 21st release of the most widely deployed open source cloud infrastructure software, was released on May 13, a little later than originally planned.
OpenStack code contributions can be tracked on Stackalytics (https://www.stackalytics.com/).
Cinder- Block Storage Service
The Cinder interface provides standard features that let you create block devices and attach them to virtual machines, such as "create volume", "delete volume" and "attach volume". More advanced features support extending capacity, snapshots, and creating clones of virtual machine images.
Notes:
Improvements to existing functionality, such as setting minimum and maximum sizes for volume types, and the ability to filter the volume list using time comparison operators.
Uploading a volume to the image service is supported when Glance multi-store and image data synchronization are in use.
Several new back-end drivers have been added, along with a variety of new features for existing drivers.
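The time-comparison filter mentioned above can be sketched in a few lines of Python. This is a minimal sketch assuming the Block Storage v3 query format `created_at=gt:<timestamp>`; the endpoint and project name are made up for illustration:

```python
from urllib.parse import urlencode

def volume_list_url(base_url, project_id, created_after):
    """Build a volume-list URL filtered with a time comparison
    operator (`gt:` here, i.e. volumes created after the timestamp)."""
    query = urlencode({"created_at": f"gt:{created_after}"})
    return f"{base_url}/v3/{project_id}/volumes?{query}"

url = volume_list_url("http://cinder.example.com:8776", "demo",
                      "2020-05-13T00:00:00")
```

Other operators (e.g. `lt:`, `gte:`) follow the same pattern; check the Block Storage API reference for the exact set supported by your deployment.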
Cyborg-hardware Management Accelerator
Cyborg (formerly known as Nomad) aims to provide a common management framework for acceleration resources (i.e. FPGA, GPU, SoC, NVMe SSD, DPDK/SPDK, eBPF/XDP, etc.).
Notes:
Users can now launch instances with accelerators managed by Cyborg, as the Nova-Cyborg integration has been completed. See the accelerator operations guide to find out which instance operations are supported.
A new API has been implemented to list devices managed by Cyborg, so accelerators can generally be viewed and managed.
Cyborg lays the groundwork for using microversions in the v2 API to provide compatibility with future versions.
The Cyborg client is now based on the OpenStack SDK and supports the v2 API.
Overall quality has been improved by adding more unit/functional tests and reducing technical debt.
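The microversion mechanism mentioned above can be illustrated with a small header-building sketch. The `OpenStack-API-Version` header follows the common OpenStack convention; the `accelerator` service-type string is an assumption here, so treat this as a sketch rather than the client's actual code:

```python
def cyborg_headers(token, microversion=None):
    """Build request headers for a Cyborg v2 API call; microversions
    are negotiated via the OpenStack-API-Version header."""
    headers = {"X-Auth-Token": token, "Accept": "application/json"}
    if microversion:
        # e.g. "accelerator 2.0" — service type assumed for illustration
        headers["OpenStack-API-Version"] = f"accelerator {microversion}"
    return headers
```

Omitting the microversion lets the service fall back to its default version.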
Glance-Image Service
Glance (OpenStack Image Service) is a service that provides discovery, registration, and download of images. Glance provides centralized storage of virtual machine images. Through Glance's RESTful API, you can query image metadata and download images. Virtual machine images can be easily stored in a variety of places, from simple file systems to object storage systems (such as OpenStack Swift).
Notes:
Multi-store support has been enhanced: users can now import a single image into multiple stores, copy existing images into multiple stores, and delete images from a single store.
A new import plug-in decompresses images.
The S3 driver has been reintroduced as a storage back end.
Horizon-graphical management service
Horizon provides a web front-end management interface (UI service) for OpenStack. Through the dashboard service provided by Horizon, administrators can use the web UI to manage the overall OpenStack cloud environment and directly see the results and running status of various operations.
Notes:
From a maintenance perspective, this release mainly focuses on bug fixes and improvements, including deprecating old features, removing previously deprecated features, improving integration test coverage, switching to mock in unit tests, and so on.
Horizon and all Horizon plug-ins now support Django 2.2, the only LTS version of Django, the framework on which Horizon depends. Note that Python 2.7 is no longer supported; we have entered the Python 3 era.
Several features have been implemented for the Keystone module: allowing users to change expired passwords (including on first login) in the user panel, password lock options, and support for access rules for application credentials.
Ironic-bare Metal Service
Ironic includes an API and several plug-ins for provisioning physical servers securely and fault-tolerantly. It can be used together with Nova as a hypervisor driver, or with Bifrost as a standalone service. By default it uses PXE and IPMI to interact with bare metal machines. Ironic also supports vendor plug-ins that implement additional functionality.
Notes:
Support for introspection rules that allow each subset of nodes to have (and retain) its own rules, for example for different hardware deliveries.
Support for a hardware retirement workflow to automate hardware decommissioning in managed clouds.
The multi-tenancy concept and additional policy options are available to non-administrators.
Authentication has been added for interactions between Ironic and its remote agents, so they can be deployed on untrusted networks.
UEFI and device selection are now available for software RAID.
Keystone-identity authentication service
Keystone (OpenStack Identity Service) is the component of the OpenStack framework responsible for managing authentication, service access rules, and service tokens. Users must verify their identity and permissions to access resources, and service operations also require permission checks; all of this is handled through Keystone. Keystone acts like a service bus, or the registry of the entire OpenStack framework: OpenStack services register their endpoints (service access URLs) with Keystone, and any call between services must first be authenticated by Keystone to obtain the endpoint of the target service before it can be made.
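The registry role described above can be illustrated with a toy sketch. This is not Keystone's actual implementation, only the register-then-lookup pattern it provides to the other services; the service names and URLs are made up:

```python
class ServiceCatalog:
    """Toy service registry: services register endpoints, callers
    look them up before invoking each other."""

    def __init__(self):
        self._endpoints = {}

    def register(self, service, interface, url):
        # Keyed by (service, interface), e.g. ("glance", "public")
        self._endpoints[(service, interface)] = url

    def endpoint(self, service, interface="public"):
        return self._endpoints[(service, interface)]

catalog = ServiceCatalog()
catalog.register("glance", "public", "http://glance.example.com:9292")
catalog.endpoint("glance")  # -> "http://glance.example.com:9292"
```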
Notes:
When using federated authentication, the user experience of creating application credentials and trusts has been greatly improved. A federated user whose role assignments come from mapped group memberships will have those memberships persisted for a configurable TTL after their token expires, during which time their application credentials remain valid.
The Keystone to Keystone assertion now contains the group membership of the user on Keystone Identity Provider, which can be mapped to group membership on Keystone Service Provider.
Federated users can now be assigned specific roles without relying on the mapping API, allowing federated users to be created and linked to their Identity Provider directly in Keystone.
When a new Keystone deployment is bootstrapped, the admin role now has the "immutable" option set by default, preventing it from being accidentally deleted or modified unless the "immutable" option is deliberately removed.
Keystonemiddleware no longer supports Identity v2.0 API, which was removed from keystone in the previous release cycle.
Kolla-containerized deployment of OpenStack services
Kolla's mission is to provide production-grade, out-of-the-box deployment capability for OpenStack clouds. The basic idea of Kolla is that everything is a container: all services run on Docker, and each container runs only one service (process), so Docker is run at the finest granularity.
Notes:
All images, scripts, and Ansible playbooks now use Python 3, and support for Python 2 has been removed.
Added support for CentOS 8 hosts and images.
Added initial support for TLS encryption of back-end API services, providing end-to-end encryption of API traffic. Keystone is currently supported.
Added support for deploying Open Virtual Network (OVN) and integrating it with Neutron.
Added support for deploying the Zun CNI (Container Network Interface) components, allowing Docker with containerd to support Zun capsules (pods).
Support for Elasticsearch Curator has been added to help manage cluster log data.
Added components required to use Mellanox network devices with Neutron.
Simplifies the configuration of external Ceph integration, making it easy to transition from a Ceph cluster deployed by Ceph-Ansible to enabling it in OpenStack.
Kuryr-OpenStack container network
Kuryr-Kubernetes is a subproject of OpenStack Neutron whose main goal is to integrate OpenStack and Kubernetes networking. The project implements a native Neutron-based network in Kubernetes, so with Kuryr-Kubernetes your OpenStack VMs and Kubernetes Pods can run on the same subnet, use Neutron's L3 routing, and use security groups to block traffic from specific source ports.
Notes:
Support for IPv6
DPDK support for nested settings and various other DPDK and SR-IOV improvements.
Multiple fixes related to NetworkPolicy support.
Manila-File sharing service
The full name of Manila project is File Share Service, and file sharing is a service. Is one of the sub-projects under the OpenStack tent mode, which is used to provide file sharing on the cloud and supports CIFS protocol and NFS protocol.
Notes:
Share groups have graduated from an experimental feature to general availability. Starting with API version 2.55, the X-OpenStack-Manila-API-Experimental header is no longer required to create/update/delete share group types, group specs, group quotas, and share groups themselves.
Where the back end supports it, shares can be created from snapshots in a different storage pool. This new feature allows better use of back-end resources by spreading out workloads that were previously confined to the pool hosting the snapshot.
A new quota control mechanism has been introduced to limit the number and size of share replicas that projects and their users can create.
Asynchronous user messages can now be queried by time interval.
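The retirement of the experimental header can be sketched as header-building logic. This is a hedged illustration, with the header and version names taken from the note above; it is not Manila client code:

```python
def share_group_headers(token, microversion):
    """Build headers for a share-group API call. Before API version
    2.55 the experimental header was required for share-group
    operations; from 2.55 onward it is not."""
    headers = {
        "X-Auth-Token": token,
        "X-OpenStack-Manila-API-Version": microversion,
    }
    major, minor = (int(part) for part in microversion.split("."))
    if (major, minor) < (2, 55):
        headers["X-OpenStack-Manila-API-Experimental"] = "True"
    return headers
```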
Neutron-Network Service
Neutron is one of the core projects of OpenStack, providing virtual networking in cloud computing environments. OpenStack Networking (Neutron) manages the access layer of all virtual network infrastructure (VNI) and physical network infrastructure (PNI) in an OpenStack environment.
Notes:
The OVN driver has been merged into the Neutron repository and is now one of the in-tree ML2 drivers, alongside linuxbridge and openvswitch. Benefits of the OVN driver over the openvswitch driver include, for example, DVR with distributed SNAT traffic, distributed DHCP, and the possibility of running without network nodes. The other ML2 drivers remain in-tree and fully supported. The current default agent is still openvswitch, but the plan is for the OVN driver to become the default choice in the future.
Support for stateless security groups has been added. Users can now create a security group set to stateless, which means conntrack will not be used for any rule in that group. A given port can use only stateless or only stateful security groups. In some use cases, stateless security groups let operators choose optimized datapath performance, whereas stateful security groups impose additional processing on the system.
Role-based access control (RBAC) now covers address scopes and subnet pools. Address scopes and subnet pools are usually defined by operators and exposed to users; this change allows operators to apply more granular access control to them.
Support for tagging resources during creation has been added to the Neutron API. Users can now set tags on resources (such as ports) directly in the POST request that creates them. This greatly improves the performance of Kubernetes network operations; for example, the number of API calls Kuryr must send to Neutron is greatly reduced.
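Tagging at creation time can be sketched as a single POST body instead of a create call followed by tag updates. The exact body layout should be checked against the Neutron API reference; the field values here are made up for illustration:

```python
import json

def port_create_body(network_id, name, tags):
    """Build a request body for POST /v2.0/ports with tags set at
    creation time, so no follow-up tagging calls are needed."""
    return json.dumps({
        "port": {
            "network_id": network_id,
            "name": name,
            "tags": list(tags),
        }
    })

body = port_create_body("net-1234", "kuryr-port-0", ["kuryr", "web"])
```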
Nova-Computing Services
Nova is the compute controller in the OpenStack cloud. All activities supporting the lifecycle of instances in the OpenStack cloud are handled by Nova, making it a platform responsible for managing computing resources, networking, authentication, and the required scalability.
Notes:
Cold migration and resize between Nova cells are supported.
Pre-caching images on Nova compute hosts is supported.
Creating instances with accelerator devices is supported through Cyborg.
Further support for moving servers with minimum-bandwidth guarantees.
A nova-manage placement audit CLI is provided to find and clean up orphaned resource allocations.
The Nova API policies are introducing new default roles with scope_type support. These changes improve security and manageability. The new policies are richer in handling system- and project-scoped tokens, with "reader" and "member" roles. This feature is disabled by default and can be enabled through configuration options. For more details, see the Policy Concepts documentation.
Octavia-load balancer service
Octavia is the OpenStack LBaaS implementation, a service that provides load balancing for virtual machine traffic. In essence it is similar to Trove: it calls the Nova and Neutron APIs to spawn virtual machines with HAProxy and Keepalived installed and connect them to the target network.
Notes:
Octavia now supports the deployment of load balancers in specific availability zones. This allows load balancing to be deployed to the edge environment.
The Octavia amphora driver adds a technology preview feature to improve the resilience of the control plane. If the control plane host fails during the load balancer configuration operation, the standby controller can restore the in-process configuration and complete the request.
Users can now specify the TLS ciphers acceptable to listeners and pools. This allows load balancers to enforce security compliance requirements.
Placement-Placement Service
Placement tracks the inventory and usage of resource providers. A resource provider can be a compute node, a shared storage pool, or an IP allocation pool. For example, an instance created on a compute node may consume CPU and memory from the compute node resource provider, disk from a shared storage pool resource provider, and an IP address from an external IP resource provider.
Notes:
"by configuring a configurable retry count, robustness is improved to deal with common high-level parallel write allocation situations, such as busy cluster hypervisors."
Puppet Openstack-OpenStack Puppet installation module
The Puppet module for OpenStack brings scalable and reliable IT automation to OpenStack cloud deployments.
Notes:
Puppet OpenStack can now bootstrap Keystone with an administrator password instead of the legacy administrator token.
Swift-Object Storage
Swift is not a file system or real-time data storage system, but object storage for long-term storage of permanent types of static data. This data can be retrieved, adjusted, and updated if necessary. Swift is best suited for storing data such as virtual machine images, pictures, mail, and archive backups.
Notes:
A new system namespace has been added for Swift containers and objects.
A new Swift object versioning API has been added using the new namespace.
Support for S3 version control has been added with the new API.
The ability to perform a "seamless" reload using SIGUSR1 has been added, during which the WSGI server's socket never stops accepting connections.
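Enabling the new object versioning on a container reduces to setting one container metadata header. The `X-Versions-Enabled` header name is assumed from the Ussuri-era versioning API, so verify it against your Swift version; this sketch only builds the headers:

```python
def enable_versioning_headers(token):
    """Headers for a container PUT/POST that turns on the new Swift
    object versioning (header name assumed from the Ussuri API)."""
    return {
        "X-Auth-Token": token,
        "X-Versions-Enabled": "true",
    }
```

Once enabled, overwriting or deleting an object keeps prior versions retrievable through the versioning API rather than destroying them.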
Vitrage-platform problem location Analysis Service
Vitrage is an OpenStack RCA (Root Cause Analysis) service that organizes, analyzes, and extends OpenStack alerts and events to find the root cause before a real problem occurs.
Notes:
A more concise and friendly Template Version 3 syntax has been added.
Watcher-Resource Optimization Service
Watcher provides a complete chain of optimization cycles: from metrics receivers to optimization processors and operation planning applications. The goal of Watcher is to provide a powerful framework to achieve a wide range of cloud optimization goals, including reducing data center operating costs, improving system performance through intelligent virtual machine migration, and improving energy efficiency. In addition, Watcher allows users to customize rich resource optimization objectives and strategy algorithms.
Notes:
New webhook API and new audit type EVENT have been added. Watcher users can now create audits using the EVENT type, which will be triggered by webhook API.
The Nova data model is now built using the decision engine's thread pool, greatly reducing the total time required to build the model.
Zaqar-message Service
Zaqar is the multi-tenant cloud messaging service in OpenStack, modeled on Amazon's SQS messaging component. It provides a channel for building scalable, reliable, high-performance cloud applications within OpenStack. Zaqar is characterized by a full RESTful API that uses producer/consumer and publisher/subscriber patterns to transmit messages. Developers can use its different communication modes to send messages between the components of their SaaS and mobile applications.
Notes:
Queues can now be queried with "with_count" to return the number of queues, helping users quickly obtain the exact total number of queues they own.
A new resource called Topic has been introduced, a concept borrowed from SNS. Users send messages to a topic, and subscribers then receive those messages over different protocols (for example http, email, or SMS).
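The `with_count` query can be sketched as URL construction; the endpoint here is made up for illustration:

```python
from urllib.parse import urlencode

def queue_list_url(base_url, with_count=True):
    """URL for listing Zaqar queues; with_count=true asks the API to
    also return the total number of queues owned by the caller."""
    query = urlencode({"with_count": str(with_count).lower()})
    return f"{base_url}/v2/queues?{query}"

queue_list_url("http://zaqar.example.com:8888")
# -> "http://zaqar.example.com:8888/v2/queues?with_count=true"
```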
Zun-Container Service
As a component providing container management services, Zun allows users to quickly launch and manage containers without a management server or cluster. It integrates core OpenStack services such as Neutron, Cinder and Keystone, bringing containers to OpenStack quickly: all of OpenStack's existing networking, storage, and identity tools apply to the container system, enabling containers to meet security and compliance requirements. Zun supports a variety of container technologies such as Docker, Rkt, and Clear Containers; support for Docker is now complete.
Notes:
Starting with this release, Zun adds support for CRI-compatible runtimes. Zun uses a CRI runtime to implement the concept of a pod, so users can use the Zun API to create pods in Kata Containers through the CRI runtime.
Quick installation and testing using devstack
DevStack is a series of extensible scripts for quickly installing a complete OpenStack environment for all the latest versions of git master. It is used interactively as the development environment and as the basis for functional testing of OpenStack projects.
Install Linux
First prepare a clean and minimized Linux system. DevStack attempts to support the two latest LTS versions of Ubuntu, the latest and the current Fedora version, CentOS/RHEL 7, as well as Debian and OpenSUSE.
If you have no special requirements, Ubuntu 18.04 (Bionic Beaver) is the most tested and will probably be the smoothest.
Add Stack User
DevStack should be run as a non-root user with sudo enabled (the standard login users of cloud images, such as "ubuntu" or "cloud-user", are usually fine).
If you do not use a cloud image, you can create a separate stack user to run DevStack:
$ sudo useradd -s /bin/bash -d /opt/stack -m stack
Since this user will make changes to your system, it should have sudo privileges:
$echo "stack ALL= (ALL) NOPASSWD: ALL" | sudo tee / etc/sudoers.d/stack$ sudo su-stack
Download DevStack
$echo "stack ALL= (ALL) NOPASSWD: ALL" | sudo tee / etc/sudoers.d/stack $sudo su-stack
The storage devstack repo contains a script that is primarily used to install OpenStack and templates for configuration files.
Creating local.conf
Create a local.conf file with four passwords at the root of the devstack git repo:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
This is the minimum configuration required to start using DevStack.
Start installation
$ ./stack.sh
This will take 15-20 minutes, depending largely on the speed of the network. Many software packages will be installed during this process.
This concludes the article on the highlights of the Ussuri release of OpenStack. I hope the content above has been of some help and that you have learned something new. If you found the article good, please share it for more people to see.