

What to do when a new node in OpenStack reports the error "Failed to create resource provider"?


This article explains in detail what to do when a new node in OpenStack reports the error "Failed to create resource provider". The approach is very practical, so it is shared here as a reference; I hope you get something out of it after reading.

1. Background

A compute node failed and went down, so it could no longer do any work. It was first moved out of the cluster, and after the node was repaired it was rejoined to the cluster. At that point the following error appeared: ResourceProviderCreationFailed: Failed to create resource provider.
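For reference, moving a broken node out of the cluster and bringing it back later is normally done with the compute service and cell commands below. This is only a minimal sketch of the usual sequence, assuming admin credentials are loaded and the hypervisor is named bdc2; the service ID and cell UUID must be taken from the corresponding list commands on the actual deployment.

# openstack compute service list --service nova-compute --host bdc2
# openstack compute service set --disable bdc2 nova-compute
# openstack compute service delete <service-id>
# su -s /bin/sh -c "nova-manage cell_v2 delete_host --cell_uuid <cell-uuid> --host bdc2" nova

Once the node is repaired and nova-compute is running again, it is mapped back into the cell with:

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova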

2. Error message

# vim nova-compute.log
2019-07-16 16:27:55.441 1166754 ERROR nova.scheduler.client.report [req-c50f65e8-ffd8-4a10-8d5e-0ec8d408a3c8 -] [req-9e5aad63-21d1-4297-be27-92ba9b8bfe9f] Failed to create resource provider record in placement API for UUID 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04. Got 409: {"errors": [{"status": 409, "request_id": "req-9e5aad63-21d1-4297-be27-92ba9b8bfe9f", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: bdc2 already exists.  ", "title": "Conflict"}]}.
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager [req-c50f65e8-ffd8-4a10-8d5e-0ec8d408a3c8 -] Error updating resources for node bdc2.: ResourceProviderCreationFailed: Failed to create resource provider bdc2
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager Traceback (most recent call last):
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7426, in update_available_resource_for_node
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 688, in update_available_resource
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     return f(*args, **kwargs)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 712, in _update_available_resource
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     self._update(context, cn)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 886, in _update
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     inv_data,
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 68, in set_inventory_for_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid,
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 1104, in set_inventory_for_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 665, in _ensure_resource_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 64, in wrapper
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     return f(self, *a, **k)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 612, in _create_resource_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     raise exception.ResourceProviderCreationFailed(name=name)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager ResourceProviderCreationFailed: Failed to create resource provider bdc2
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager

3. Process

3.1 Problem one: UUID conflict

The key part of the error is "Conflicting resource provider name: bdc2 already exists." When bdc2 was removed, the service and compute_nodes records in the nova database were cleared and the related metadata was deleted as well, yet some metadata evidently still remained.
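A quick way to see what is left behind is to query the databases directly. A minimal sketch, assuming root access to MariaDB and the default database names (nova, nova_api):

# mysql nova -e "select host, deleted from services where host='bdc2';"
# mysql nova -e "select uuid, host, deleted from compute_nodes where host='bdc2';"
# mysql nova_api -e "select uuid, name from resource_providers where name='bdc2';"

If a resource_providers row for bdc2 still exists while the compute node is being recreated with a new UUID, that row is the source of the conflict reported by the placement API.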

It turned out that the host mapping in the cell database had not been deleted.

# nova-manage cell_v2 list_hosts
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
+-----------+--------------------------------------+----------+
| Cell Name | Cell UUID                            | Hostname |
+-----------+--------------------------------------+----------+
| cell1     | df0d7c04-52b3-454d-a295-4f4ad836526b | bdc1     |
| cell1     | df0d7c04-52b3-454d-a295-4f4ad836526b | bdc2     |
| cell1     | df0d7c04-52b3-454d-a295-4f4ad836526b | bdc3     |
| cell1     | df0d7c04-52b3-454d-a295-4f4ad836526b | bdc4     |
| cell1     | df0d7c04-52b3-454d-a295-4f4ad836526b | bdc5     |
| cell1     | df0d7c04-52b3-454d-a295-4f4ad836526b | bdc6     |
| cell1     | df0d7c04-52b3-454d-a295-4f4ad836526b | bdc7     |
+-----------+--------------------------------------+----------+

So the host mapping was deleted and rediscovered manually, but the error did not change.

# su -s /bin/sh -c "nova-manage cell_v2 delete_host --cell_uuid df0d7c04-52b3-454d-a295-4f4ad836526b --host bdc2" nova
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': df0d7c04-52b3-454d-a295-4f4ad836526b
Found 0 unmapped computes in cell: df0d7c04-52b3-454d-a295-4f4ad836526b

The error says that UUID 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04 conflicts with bdc2. Check the metadata database: besides the nova database there is also a nova_api database.

MariaDB [nova_api]> select uuid,name from resource_providers where name='bdc2';
+--------------------------------------+------+
| uuid                                 | name |
+--------------------------------------+------+
| e131e7c4-f7db-4889-8c34-e750e7b129da | bdc2 |
+--------------------------------------+------+

MariaDB [nova_api]> select uuid,host from nova.compute_nodes where host='bdc2';
+--------------------------------------+------+
| uuid                                 | host |
+--------------------------------------+------+
| 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04 | bdc2 |
+--------------------------------------+------+

This reveals the crux of the problem: the UUIDs really do conflict. e131e7c4-f7db-4889-8c34-e750e7b129da is the UUID of the old bdc2, so the uuid column in the resource_providers table is updated manually to the new compute node's UUID.
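Before running the update below, it is prudent to back up the affected databases, since direct metadata edits are risky (see the summary at the end of this article). A minimal sketch, assuming local root access to MariaDB:

# mysqldump --single-transaction nova > nova-backup.sql
# mysqldump --single-transaction nova_api > nova_api-backup.sql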

MariaDB [nova_api]> update resource_providers set uuid='4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04' where name='bdc2' and uuid='e131e7c4-f7db-4889-8c34-e750e7b129da';
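As an alternative to editing the table, the stale resource provider could also be deleted through the Placement API and left for nova-compute to recreate with the new UUID on its next periodic update. A sketch, assuming the osc-placement client plugin is installed; note that placement refuses to delete a provider that still has allocations against it, which is exactly the situation handled in problem two below:

# openstack resource provider list
# openstack resource provider delete e131e7c4-f7db-4889-8c34-e750e7b129da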

3.2 Problem two: allocation conflict

The UUID conflict was resolved, but the new compute node still behaved abnormally: newly created CVMs were not scheduled onto it, and only CVMs with a small resource footprint could be migrated there, while large ones could not. Meanwhile the nova-compute log on the node kept printing warnings.

2019-07-16 19:10:02.684 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f has been moved to another host bdc3 (bdc3). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:02.738 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance e0d8d6df-4b48-402b-aa33-97c4a6166c5b has been moved to another host bdc6 (bdc6). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 6, u'MEMORY_MB': 12288, u'DISK_GB': 50}}.
2019-07-16 19:10:02.791 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance 9d860729-597a-4420-bb8f-e9415587d808 has been moved to another host bdc3 (bdc3). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 4, u'MEMORY_MB': 8192, u'DISK_GB': 50}}.
2019-07-16 19:10:02.860 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance 8e42328d-fd1c-4abc-acac-5c6e09623af6 has been moved to another host bdc5 (bdc5). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:02.912 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance 1d59e7db-bf1b-478c-a6bd-10287365cb65 has been moved to another host bdc3 (bdc3). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:02.960 1192779 INFO nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance 61223c2d-0b0c-4729-85e6-741c88e6e476 has allocations against this compute host but is not found in the database.
InstanceNotFound_Remote: Instance 61223c2d-0b0c-4729-85e6-741c88e6e476 could not be found.
2019-07-16 19:10:03.014 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 has been moved to another host bdc6 (bdc6). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:03.068 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Instance 0e7c21d4-a5fb-4059-aa47-bad47700e827 has been moved to another host bdc1 (bdc1). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:03.069 1192779 INFO nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 -] Final resource view: name=bdc2 phys_ram=131037MB used_ram=512MB phys_disk=115480GB used_disk=0GB total_vcpus=24 used_vcpus=0 pci_stats=[]

The warnings indicate that allocation records for several instances on this compute node conflict with records on other nodes. Check the metadata database.

MariaDB [nova_api]> select * from allocations where resource_provider_id=7;
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+-------+
| created_at          | updated_at | id   | resource_provider_id | consumer_id                          | resource_class_id | used  |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+-------+
| 2019-07-09 09:10:27 | NULL       | 1471 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 0 |     4 |
| 2019-07-08 14:58:36 | NULL       | 1444 |                    7 | 61223c2d-0b0c-4729-85e6-741c88e6e476 |                 0 |     6 |
| 2019-07-09 10:09:33 | NULL       | 1510 |                    7 | e0d8d6df-4b48-402b-aa33-97c4a6166c5b |                 0 |     6 |
| 2019-07-09 09:18:30 | NULL       | 1477 |                    7 | 1d59e7db-bf1b-478c-a6bd-10287365cb65 |                 0 |     8 |
| 2019-07-09 09:26:26 | NULL       | 1483 |                    7 | 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f |                 0 |     8 |
| 2019-07-09 09:36:40 | NULL       | 1486 |                    7 | 0e7c21d4-a5fb-4059-aa47-bad47700e827 |                 0 |     8 |
| 2019-07-09 09:46:02 | NULL       | 1492 |                    7 | 8e42328d-fd1c-4abc-acac-5c6e09623af6 |                 0 |     8 |
| 2019-07-09 10:02:57 | NULL       | 1504 |                    7 | 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 |                 0 |     8 |
| 2019-07-09 09:10:27 | NULL       | 1472 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 1 |  8192 |
| 2019-07-08 14:58:36 | NULL       | 1445 |                    7 | 61223c2d-0b0c-4729-85e6-741c88e6e476 |                 1 | 12288 |
| 2019-07-09 10:09:33 | NULL       | 1511 |                    7 | e0d8d6df-4b48-402b-aa33-97c4a6166c5b |                 1 | 12288 |
| 2019-07-09 09:18:30 | NULL       | 1478 |                    7 | 1d59e7db-bf1b-478c-a6bd-10287365cb65 |                 1 | 16384 |
| 2019-07-09 09:26:26 | NULL       | 1484 |                    7 | 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f |                 1 | 16384 |
| 2019-07-09 09:36:40 | NULL       | 1487 |                    7 | 0e7c21d4-a5fb-4059-aa47-bad47700e827 |                 1 | 16384 |
| 2019-07-09 09:46:02 | NULL       | 1493 |                    7 | 8e42328d-fd1c-4abc-acac-5c6e09623af6 |                 1 | 16384 |
| 2019-07-09 10:02:57 | NULL       | 1505 |                    7 | 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 |                 1 | 16384 |
| 2019-07-08 14:58:36 | NULL       | 1446 |                    7 | 61223c2d-0b0c-4729-85e6-741c88e6e476 |                 2 |    50 |
| 2019-07-09 09:10:27 | NULL       | 1473 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 2 |    50 |
| 2019-07-09 09:18:30 | NULL       | 1479 |                    7 | 1d59e7db-bf1b-478c-a6bd-10287365cb65 |                 2 |    50 |
| 2019-07-09 09:26:26 | NULL       | 1485 |                    7 | 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f |                 2 |    50 |
| 2019-07-09 09:36:40 | NULL       | 1488 |                    7 | 0e7c21d4-a5fb-4059-aa47-bad47700e827 |                 2 |    50 |
| 2019-07-09 09:46:02 | NULL       | 1494 |                    7 | 8e42328d-fd1c-4abc-acac-5c6e09623af6 |                 2 |    50 |
| 2019-07-09 10:02:57 | NULL       | 1506 |                    7 | 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 |                 2 |    50 |
| 2019-07-09 10:09:33 | NULL       | 1512 |                    7 | e0d8d6df-4b48-402b-aa33-97c4a6166c5b |                 2 |    50 |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+-------+
24 rows in set (0.00 sec)

MariaDB [nova_api]> select * from allocations where consumer_id='9d860729-597a-4420-bb8f-e9415587d808';
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+
| created_at          | updated_at | id   | resource_provider_id | consumer_id                          | resource_class_id | used |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+
| 2019-07-09 09:10:27 | NULL       | 1468 |                    6 | 9d860729-597a-4420-bb8f-e9415587d808 |                 0 |    4 |
| 2019-07-09 09:10:27 | NULL       | 1469 |                    6 | 9d860729-597a-4420-bb8f-e9415587d808 |                 1 | 8192 |
| 2019-07-09 09:10:27 | NULL       | 1470 |                    6 | 9d860729-597a-4420-bb8f-e9415587d808 |                 2 |   50 |
| 2019-07-09 09:10:27 | NULL       | 1471 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 0 |    4 |
| 2019-07-09 09:10:27 | NULL       | 1472 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 1 | 8192 |
| 2019-07-09 09:10:27 | NULL       | 1473 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 2 |   50 |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+

As the output shows, there are allocation records with resource_provider_id = 7, i.e. the new bdc2 already carries resource allocations. Picking one of the instance IDs shows it has allocations not only on provider 7 but also on provider 6. This is a leftover of the instances that were evacuated from the old node: after the UUID change, the new node inherited the old node's allocation records, which is why the log reports the conflict warnings. Moreover, (8192 + 12288 + 12288 + 16384 + 16384 + 16384 + 16384 + 16384) / 1024 = 112 GB.

In other words, 112 GB of memory is already recorded as allocated, which is close to the memory limit of the machine (phys_ram=131037MB, roughly 128 GB), so the scheduler will not favor this node when creating CVMs, and only small CVMs can be migrated onto it.
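The same figure can be read straight from the allocations table. A minimal sketch, assuming resource_class_id 1 is MEMORY_MB as in the output above:

# mysql nova_api -e "select resource_provider_id, sum(used) as memory_mb from allocations where resource_class_id=1 group by resource_provider_id;"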

Since the metadata has already been modified by hand, keep going down that road and clean up the stale allocations as well.

MariaDB [nova_api]> delete from allocations where resource_provider_id=7;
Query OK, 24 rows affected (0.00 sec)

MariaDB [nova_api]> select * from allocations where resource_provider_id=7;
Empty set (0.00 sec)
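The cleanup can also be cross-checked from the placement side rather than with SQL. A sketch, assuming the osc-placement client plugin is installed: usage on the bdc2 provider should now be back to zero, and the already-deleted instance 61223c2d-0b0c-4729-85e6-741c88e6e476 from the warning log should no longer hold any allocations:

# openstack resource provider usage show 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04
# openstack resource provider allocation show 61223c2d-0b0c-4729-85e6-741c88e6e476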

4. Verification

Three virtual machines requiring large resources were created; all of them landed on bdc2, and no similar warnings appeared in the nova-compute log, which shows the problem is solved.
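To confirm where a new CVM actually landed, the host attribute of the server can be checked with admin credentials; the server name big-vm-01 below is only a placeholder:

# openstack server show big-vm-01 -c OS-EXT-SRV-ATTR:host -c flavor -c status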

5. Remarks

For reference, the row counts of the metadata tables of the core OpenStack components:

MariaDB [nova_api]> SELECT table_name, table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'nova' and table_rows > 0 ORDER BY table_rows DESC;
+---------------------------+------------+
| table_name                | table_rows |
+---------------------------+------------+
| instance_actions_events   |       2335 |
| instance_system_metadata  |       2324 |
| instance_actions          |       1959 |
| virtual_interfaces        |        474 |
| block_device_mapping      |        451 |
| instance_info_caches      |        267 |
| instances                 |            |
| instance_id_mappings      |        267 |
| instance_faults           |        217 |
| instance_extra            |            |
| migrations                |        122 |
| s3_images                 |         17 |
| services                  |         13 |
| compute_nodes             |          8 |
| security_groups           |          6 |
+---------------------------+------------+
15 rows in set (0.00 sec)

MariaDB [nova_api]> SELECT table_name, table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'nova_api' and table_rows > 0 ORDER BY table_rows DESC;
+--------------------+------------+
| table_name         | table_rows |
+--------------------+------------+
| consumers          |        339 |
| instance_mappings  |        283 |
| request_specs      |        213 |
| allocations        |            |
| traits             |        164 |
| quotas             |         57 |
| inventories        |         23 |
| flavors            |          9 |
| projects           |          8 |
| key_pairs          |          8 |
| users              |          8 |
| resource_providers |          8 |
| host_mappings      |          8 |
| cell_mappings      |          2 |
+--------------------+------------+
14 rows in set (0.01 sec)

MariaDB [nova_api]> SELECT table_name, table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'nova_cell0' and table_rows > 0 ORDER BY table_rows DESC;
+---------------------------+------------+
| table_name                | table_rows |
+---------------------------+------------+
| instance_system_metadata  |        112 |
| instance_id_mappings      |         16 |
| block_device_mapping      |         16 |
| instance_faults           |         16 |
| instance_extra            |         16 |
| instances                 |         16 |
| instance_info_caches      |         16 |
| s3_images                 |          2 |
+---------------------------+------------+

MariaDB [nova_api]> SELECT table_name, table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'cinder' and table_rows > 0 ORDER BY table_rows DESC;
+------------------------+------------+
| table_name             | table_rows |
+------------------------+------------+
| reservations           |        573 |
| volume_admin_metadata  |        478 |
| volume_attachment      |        451 |
| volumes                |        200 |
| volume_glance_metadata |         64 |
| quotas                 |         21 |
| quota_usages           |         15 |
| quota_classes          |          6 |
| services               |          2 |
| workers                |          1 |
+------------------------+------------+
10 rows in set (0.07 sec)

MariaDB [nova_api]> SELECT table_name, table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'glance' and table_rows > 0 ORDER BY table_rows DESC;
+------------------+------------+
| table_name       | table_rows |
+------------------+------------+
| images           |         19 |
| image_locations  |         19 |
| image_properties |          9 |
| alembic_version  |          1 |
+------------------+------------+
4 rows in set (0.03 sec)

MariaDB [nova_api]> SELECT table_name, table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'keystone' and table_rows > 0 ORDER BY table_rows DESC;
+-----------------+------------+
| table_name      | table_rows |
+-----------------+------------+
| endpoint        |         18 |
| assignment      |         17 |
| user            |         14 |
| password        |         14 |
| local_user      |         14 |
| project         |         12 |
| service         |          6 |
| migrate_version |          4 |
| role            |          2 |
+-----------------+------------+
9 rows in set (0.08 sec)

MariaDB [nova_api]> SELECT table_name, table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'neutron' and table_rows > 0 ORDER BY table_rows DESC;
+---------------------------+------------+
| table_name                | table_rows |
+---------------------------+------------+
| ml2_vxlan_allocations     |       1000 |
| standardattributes        |        149 |
| ports                     |         69 |
| ipamallocations           |         69 |
| ipallocations             |         69 |
| ml2_port_bindings         |         69 |
| portsecuritybindings      |         69 |
| securitygroupportbindings |         68 |
| ml2_port_binding_levels   |         66 |
| securitygrouprules        |         59 |
| quotas                    |         56 |
| quotausages               |         17 |
| agents                    |         11 |
| default_security_group    |          8 |
| segmenthostmappings       |          8 |
| securitygroups            |          8 |
| allowedaddresspairs       |          4 |
| provisioningblocks        |          4 |
| alembic_version           |          2 |
+---------------------------+------------+
19 rows in set (0.18 sec)

6. Summary

1. Metadata operations are very dangerous; avoid them if at all possible, or keep them to a minimum. If you really have to touch the metadata, back up the databases first.
2. For CVMs and volumes that cannot be deleted, do not simply set the deleted field in the metadata. That only deceives yourself and others: the objects disappear from the dashboard, but the actual resources are not released and the files remain in the backend storage.

3. Hardware operations should be done carefully; in short, any dangerous operation should be confirmed again and again.

This is the end of the article on how to handle the error "Failed to create resource provider" on a new node in OpenStack. I hope the content above is of some help and lets you learn something new. If you found the article useful, please share it so that more people can see it.
