

How to create a virtual machine with nova

2025-01-27 Update, from SLTechnology News&Howtos > Servers


Shulou (Shulou.com) 05/31 Report

This article walks through how Nova creates a virtual machine. Most readers are probably not familiar with the details, so it is shared here for reference; I hope you learn a lot from it.

General description:

1. Create an instance interface

First, take a look at the API request itself.

REQ: curl -i 'http://ubuntu80:8774/v2/0e962df9db3f4469b3d9bfbc5ffdaf7e/servers' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: admin" -H "X-Auth-Token: {SHA1}e87219521f61238b143fbb323b962930380ce022" -d '{"server": {"name": "ubuntu_test", "imageRef": "cde1d850-65bb-48f6-8ee9-b990c7ccf158", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cfa25cef-96c3-46f1-8522-d9518eb5a451"}]}}'
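For readers who prefer code to raw curl, the same boot request body can be assembled in a few lines of Python. This is an illustrative sketch: the helper function and its name are mine, not part of nova or python-novaclient.

```python
import json

# Hypothetical helper that builds the same "boot server" JSON body the
# request above sends; only the field names come from the API request.
def build_boot_request(name, image_ref, flavor_ref, network_uuid, count=1):
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "max_count": count,
            "min_count": count,
            "networks": [{"uuid": network_uuid}],
        }
    }

body = build_boot_request("ubuntu_test",
                          "cde1d850-65bb-48f6-8ee9-b990c7ccf158", "2",
                          "cfa25cef-96c3-46f1-8522-d9518eb5a451")
print(json.dumps(body, indent=2))
```

Posting this dict to /v2/{tenant_id}/servers with the headers shown above reproduces the novaclient request.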

The server-side handler is again a Controller.

The specific location is:

nova.api.openstack.compute.servers.Controller.create

Note that this method is decorated with @wsgi.response(202). Per the HTTP status codes, 202 means the server has accepted the request but has not yet processed it, which tells us this is an asynchronous task.
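The decorator pattern can be illustrated with a minimal sketch. This is a simplified stand-in, not nova's actual wsgi.response implementation; it only shows the idea of attaching a default status code to a handler for the WSGI layer to read back.

```python
# Simplified sketch of a @response(202)-style decorator: it tags the handler
# with a status code instead of changing what the handler returns.
def response(code):
    def decorator(func):
        func.wsgi_code = code  # attribute later read by the WSGI machinery
        return func
    return decorator

@response(202)
def create(req, body):
    # The handler returns right away; the instance is built asynchronously.
    return {"server": {"status": "BUILD"}}

print(create.wsgi_code)  # → 202
```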

Finally, the method calls self.compute_api.create(...); self.compute_api is set in __init__(...) as self.compute_api = compute.API().

So compute.API() resolves to nova.compute.api.API, whose create(...) internally calls nova.compute.api.API._create_instance(...).

nova.compute.api.API._create_instance(...) is where the interesting work happens.

2. Task status changes to SCHEDULING for the first time

In nova.compute.api.API._create_instance(...) there is a step:

instances = self._provision_instances(context, instance_type, min_count, max_count, base_options, boot_meta, security_groups, block_device_mapping, shutdown_terminate, instance_group, check_server_group_quota)

This method lives at nova.compute.api.API._provision_instances, which internally calls:

instance = self.create_db_entry_for_new_instance(...)

Inside the corresponding nova.compute.api.API.create_db_entry_for_new_instance, there is the following call:

self._populate_instance_for_create(context, instance, image, index, security_group, instance_type)

It corresponds to nova.compute.api.API._populate_instance_for_create, which sets the task state to SCHEDULING for the first time:

instance.vm_state = vm_states.BUILDING
instance.task_state = task_states.SCHEDULING

Back in the _provision_instances method, the remaining work is mainly reserving quota.

3. From nova-api to nova-conductor

In nova.compute.api.API._create_instance(...) there is another step:

self.compute_task_api.build_instances(context, instances=instances, image=boot_meta, filter_properties=filter_properties, admin_password=admin_password, injected_files=injected_files, requested_networks=requested_networks, security_groups=security_groups, block_device_mapping=block_device_mapping, legacy_bdm=False)

From this step on, execution leaves nova-api: the call chain continues through nova-conductor, nova-scheduler, and nova-compute.

@property
def compute_task_api(self):
    if self._compute_task_api is None:
        # TODO(alaski): Remove calls into here from conductor manager so
        # that this isn't necessary. #1180540
        from nova import conductor
        self._compute_task_api = conductor.ComputeTaskAPI()
    return self._compute_task_api

4. Nova-conductor calls nova-scheduler and nova-compute

We have now reached the conductor part, located at nova.conductor.ComputeTaskAPI:

def ComputeTaskAPI(*args, **kwargs):
    use_local = kwargs.pop('use_local', False)
    if oslo.config.cfg.CONF.conductor.use_local or use_local:
        api = conductor_api.LocalComputeTaskAPI
    else:
        api = conductor_api.ComputeTaskAPI
    return api(*args, **kwargs)

The use_local keyword defaults to False, so the branch taken depends on CONF.conductor.use_local; in the setup walked through here, the local variant is selected:

api = conductor_api.LocalComputeTaskAPI

It lives at nova.conductor.LocalComputeTaskAPI, whose constructor (__init__(...)) holds a manager.ComputeTaskManager, i.e. nova.conductor.ComputeTaskManager.

Now look at that class's build_instances method (nova.conductor.ComputeTaskManager.build_instances(...)).

In build_instances(), nova-conductor generates a request_spec dictionary:

request_spec = scheduler_utils.build_request_spec(...)

It contains detailed information about the virtual machine; based on it, nova-scheduler selects the best host for the virtual machine.
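As an illustration, a heavily simplified stand-in for scheduler_utils.build_request_spec might assemble a dict shaped like the following. The field names here are indicative only; the real function carries considerably more detail.

```python
# Illustrative (simplified) shape of the request_spec dict that
# nova-conductor hands to nova-scheduler. Field names are assumptions
# made for this sketch, not guaranteed to match nova exactly.
def build_request_spec(image, instances):
    instance = instances[0]
    return {
        "image": image,                          # image metadata
        "instance_properties": instance,          # per-instance details
        "instance_type": instance.get("flavor"),  # flavor (CPU/RAM/disk)
        "num_instances": len(instances),          # how many to place
    }

spec = build_request_spec(
    {"id": "cde1d850-65bb-48f6-8ee9-b990c7ccf158"},
    [{"uuid": "fake-uuid", "flavor": {"vcpus": 1, "memory_mb": 2048}}])
print(spec["num_instances"])  # → 1
```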

hosts = self.scheduler_client.select_destinations(..., request_spec, ...)

Then nova-conductor calls nova-compute through RPC to create a virtual machine.

self.compute_rpcapi.build_and_run_instance(context, instance=instance,
    host=host['host'], image=image, request_spec=request_spec,
    filter_properties=local_filter_props, admin_password=admin_password,
    injected_files=injected_files, requested_networks=requested_networks,
    security_groups=security_groups, block_device_mapping=bdms,
    node=host['nodename'], limits=host['limits'])

This invokes nova.compute.rpcapi.ComputeAPI.build_and_run_instance.

As you can see, the method name passed is 'build_and_run_instance', and cctxt.cast(...) is an asynchronous remote call: it returns immediately without waiting for a result. See the oslo.messaging module for the details.
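The difference between cast and call can be seen with a toy model that mimics oslo.messaging's contract without using the library itself. Everything in this sketch (class, queue, handler) is illustrative, not oslo.messaging internals.

```python
import queue
import threading

# Toy model of RPC semantics: cast() is fire-and-forget, call() blocks
# until the server replies. Only the contract is modeled here.
class ToyRPC:
    def __init__(self, handler):
        self.handler = handler
        self.inbox = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            method, kwargs, reply = self.inbox.get()
            result = self.handler(method, **kwargs)
            if reply is not None:       # only call() expects an answer
                reply.put(result)

    def cast(self, method, **kwargs):
        # Enqueue and return immediately; the caller never sees a result.
        self.inbox.put((method, kwargs, None))

    def call(self, method, **kwargs):
        # Enqueue, then block on a private reply queue until answered.
        reply = queue.Queue()
        self.inbox.put((method, kwargs, reply))
        return reply.get()

rpc = ToyRPC(lambda method, **kw: (method, kw))
rpc.cast('build_and_run_instance', instance='vm1')  # returns at once
print(rpc.call('ping'))  # → ('ping', {})
```

nova-conductor uses the cast style here precisely because building an instance is slow: the conductor must not block while nova-compute works.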

cctxt.cast(ctxt, 'build_and_run_instance', instance=instance, image=image,
    request_spec=request_spec, filter_properties=filter_properties,
    admin_password=admin_password, injected_files=injected_files,
    requested_networks=requested_networks, security_groups=security_groups,
    block_device_mapping=block_device_mapping, node=node, limits=limits)

The cast lands in nova.compute.manager.build_and_run_instance(...), which internally (note: via spawn, i.e. asynchronously) calls _do_build_and_run_instance(...).

_do_build_and_run_instance(...) in turn mainly calls the _build_and_run_instance function (nova.compute.manager._build_and_run_instance(...)):

5. Create and run an instance

Navigate to nova.compute.manager._build_and_run_instance(...) and you see the following code:

def _build_and_run_instance(self, context, instance, image, injected_files,
        admin_password, requested_networks, security_groups,
        block_device_mapping, node, limits, filter_properties):
    image_name = image.get('name')
    self._notify_about_instance_usage(context, instance, 'create.start',
            extra_usage_info={'image_name': image_name})
    try:
        # Resource tracker
        rt = self._get_resource_tracker(node)
        with rt.instance_claim(context, instance, limits) as inst_claim:
            # NOTE(russellb) It's important that this validation be done
            # *after* the resource tracker instance claim, as that is where
            # the host is set on the instance.
            self._validate_instance_group_policy(context, instance,
                    filter_properties)
            # Allocate resources, including network and storage. Internally
            # the task state moves from task_states.SCHEDULING to
            # task_states.NETWORKING and then to
            # task_states.BLOCK_DEVICE_MAPPING.
            with self._build_resources(context, instance, requested_networks,
                    security_groups, image,
                    block_device_mapping) as resources:
                instance.vm_state = vm_states.BUILDING
                # Task state becomes SPAWNING
                instance.task_state = task_states.SPAWNING
                instance.numa_topology = inst_claim.claimed_numa_topology
                instance.save(
                        expected_task_state=task_states.BLOCK_DEVICE_MAPPING)
                block_device_info = resources['block_device_info']
                network_info = resources['network_info']
                # Call the underlying virt API to spawn the instance
                self.driver.spawn(context, instance, image, injected_files,
                        admin_password, network_info=network_info,
                        block_device_info=block_device_info)
    except ...:
        ...  # error handling elided in the original excerpt

    # NOTE(alaski): This is only useful during reschedules, remove it now.
    instance.system_metadata.pop('network_allocated', None)
    # Read the power state of the instance
    instance.power_state = self._get_power_state(context, instance)
    # The instance is now up and running
    instance.vm_state = vm_states.ACTIVE
    # Clear the task state
    instance.task_state = None
    # Record the launch time on the instance
    instance.launched_at = timeutils.utcnow()
    try:
        instance.save(expected_task_state=task_states.SPAWNING)
    except (exception.InstanceNotFound,
            exception.UnexpectedDeletingTaskStateError) as e:
        with excutils.save_and_reraise_exception():
            self._notify_about_instance_usage(context, instance,
                    'create.end', fault=e)
    # Notify that the creation process has finished
    self._notify_about_instance_usage(context, instance, 'create.end',
            extra_usage_info={'message': _('Success')},
            network_info=network_info)

The first step is to set up a resource tracker (RT). Note that RTs come in two kinds, the claim tracker (Claim RT) and the periodic tracker (Periodic RT); you can also plug in your own extensible tracker (Extensible RT).

As the name implies, the RT set up in the _build_and_run_instance function is a claim tracker: it verifies the resources on the compute node and throws an exception if the resource claim fails.

rt = self._get_resource_tracker(node)
with rt.instance_claim(context, instance, limits) as inst_claim:
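A claim of this kind is essentially a context manager: it reserves resources on entry and rolls the reservation back if the build fails. A toy sketch of the idea follows; the classes and fields are illustrative, not nova's actual resource tracker.

```python
# Toy claim-style resource tracker: reserve on __enter__, release on
# failure in __exit__. Names and the single "free" counter are assumptions
# made for this sketch.
class Claim:
    def __init__(self, tracker, amount):
        self.tracker, self.amount = tracker, amount

    def __enter__(self):
        if self.tracker.free < self.amount:
            raise RuntimeError("insufficient resources on this host")
        self.tracker.free -= self.amount   # reserve
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:           # build failed: undo the claim
            self.tracker.free += self.amount
        return False                       # never swallow the exception

class Tracker:
    def __init__(self, free):
        self.free = free

    def instance_claim(self, amount):
        return Claim(self, amount)

rt = Tracker(free=4096)
with rt.instance_claim(2048):
    pass  # build the instance while the resources are held
print(rt.free)  # → 2048
```

On success the reservation sticks; if the body raises, the resources return to the pool, which mirrors the abort-on-failure behavior described above.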

Also note the _build_resources function, inside which instance.task_state moves from task_states.SCHEDULING to task_states.NETWORKING and then to task_states.BLOCK_DEVICE_MAPPING:

self._build_resources(context, instance, requested_networks, security_groups, image, block_device_mapping)

When the resource allocation is completed, the task status changes from task_states.BLOCK_DEVICE_MAPPING to task_states.SPAWNING.

instance.task_state = task_states.SPAWNING
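Putting the transitions described in this walkthrough together, the task-state timeline of a successful boot can be summarized as follows (a summary of the text above, not nova code):

```python
# Task-state timeline during boot, as traced in this walkthrough.
BOOT_TASK_STATES = [
    "SCHEDULING",            # set by nova-api in _populate_instance_for_create
    "NETWORKING",            # set inside _build_resources
    "BLOCK_DEVICE_MAPPING",  # set inside _build_resources
    "SPAWNING",              # set just before driver.spawn
    None,                    # cleared once the instance is ACTIVE
]
for state in BOOT_TASK_STATES:
    print(state)
```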

When everything is ready, self.driver.spawn is called to spawn the instance; at the bottom layer, the Libvirt driver carries out the actual spawn.

self.driver.spawn(context, instance, image, injected_files, admin_password, network_info=network_info, block_device_info=block_device_info)

After that, all that remains is to power the instance on and, at the right moment, send the notification that the creation process has ended. Success!

That is the whole of "how to create a virtual machine with nova". Thank you for reading! I hope this walkthrough has given you a solid understanding of the process.
