Shulou (Shulou.com) 05/31 Report — SLTechnology News & Howtos > Servers (updated 2025-02-23)
This article analyzes OpenStack Nova's scheduling strategy through worked examples. It is fairly detailed and should serve as a useful reference for interested readers.
Overview
When a new virtual machine instance is created, the Nova scheduler runs the configured Filter Scheduler, which first filters and then weighs all compute nodes, finally returning a list of candidate hosts ordered by weight and sized to the number of instances the user requested. If no host survives filtering, scheduling fails with "no hosts available".
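The filter-then-weigh flow can be sketched in a few lines (a minimal illustration only, not Nova's actual API; the `Host` fields and the sample filters/weigher are invented for the example):

```python
# Minimal sketch of Nova's filter-and-weigh scheduling flow.
# Host fields and the sample filter/weigher are illustrative only.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_vcpus: int
    free_ram_mb: int

def cpu_filter(host, req):
    # Pass hosts with enough free vCPUs for the request.
    return host.free_vcpus >= req['vcpus']

def ram_filter(host, req):
    # Pass hosts with enough free RAM for the request.
    return host.free_ram_mb >= req['ram_mb']

def ram_weigher(host):
    # Prefer hosts with more free RAM (spreading behavior).
    return host.free_ram_mb

def schedule(hosts, req, num_instances, filters, weigher):
    # 1. Filtering: drop hosts that fail any filter.
    candidates = [h for h in hosts if all(f(h, req) for f in filters)]
    # 2. Weighing: sort the survivors by descending weight.
    candidates.sort(key=weigher, reverse=True)
    # 3. Return up to the requested number of hosts; an empty
    #    list corresponds to a "no valid host" scheduling failure.
    return candidates[:num_instances]

hosts = [Host('node1', 4, 2048), Host('node2', 16, 8192), Host('node3', 1, 512)]
req = {'vcpus': 2, 'ram_mb': 1024}
print([h.name for h in schedule(hosts, req, 2, [cpu_filter, ram_filter], ram_weigher)])
```

Here `node3` is filtered out (too few vCPUs and too little RAM), and the survivors are returned best-first by free RAM.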
# Standard filter
AllHostsFilter - no filtering; all visible hosts pass.
ImagePropertiesFilter - filters based on image metadata.
AvailabilityZoneFilter - filters by availability zone (the AvailabilityZone metadata).
ComputeCapabilitiesFilter - filters on compute capabilities, matching the extra specs given when the virtual machine is created against the attributes and state of the host. The available operators are as follows:
* = (equal to or greater than, as a number; same as the vcpus case)
* == (equal to, as a number)
* != (not equal to, as a number)
* >= (greater than or equal to, as a number)
* <= (less than or equal to, as a number)
* s== (equal to, as a string)
* s!= (not equal to, as a string)
* s>= (greater than or equal to, as a string)
* s> (greater than, as a string)
* s<= (less than or equal to, as a string)
* s< (less than, as a string)

The related AggregateInstanceExtraSpecsFilter matches flavor extra specs against host-aggregate metadata (nova/scheduler/filters/aggregate_instance_extra_specs.py):

    def host_passes(self, host_state, spec_obj):
        instance_type = spec_obj.flavor
        if not instance_type.extra_specs:
            return True

        metadata = utils.aggregate_metadata_get_by_host(host_state)

        for key, req in instance_type.extra_specs.items():
            # Either no scope format, or aggregate_instance_extra_specs scope
            scope = key.split(':', 1)
            if len(scope) > 1:
                if scope[0] != _SCOPE:
                    continue
                else:
                    del scope[0]
            key = scope[0]
            aggregate_vals = metadata.get(key, None)
            if not aggregate_vals:
                LOG.debug("%(host_state)s fails instance_type extra_specs "
                          "requirements. Extra_spec %(key)s is not in "
                          "aggregate.",
                          {'host_state': host_state, 'key': key})
                return False
            for aggregate_val in aggregate_vals:
                if extra_specs_ops.match(aggregate_val, req):
                    break
            else:
                LOG.debug("%(host_state)s fails instance_type extra_specs "
                          "requirements. '%(aggregate_vals)s' do not "
                          "match '%(req)s'",
                          {'host_state': host_state, 'req': req,
                           'aggregate_vals': aggregate_vals})
                return False
        return True
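Extra-spec keys may be namespaced, e.g. `aggregate_instance_extra_specs:<key>`; keys in other namespaces are skipped. A small sketch of just that key-namespace logic (the helper name `relevant_key` is invented for illustration):

```python
# Sketch of the extra-spec scope handling: keys may be namespaced
# as 'aggregate_instance_extra_specs:<key>'; keys belonging to other
# namespaces are ignored, and un-namespaced keys are used as-is.
_SCOPE = 'aggregate_instance_extra_specs'

def relevant_key(spec_key):
    """Return the bare metadata key, or None if it is another namespace."""
    scope = spec_key.split(':', 1)
    if len(scope) > 1:
        if scope[0] != _SCOPE:
            return None     # e.g. a 'capabilities:' or 'trait:' spec
        return scope[1]
    return scope[0]

print(relevant_key('aggregate_instance_extra_specs:ssd'))  # 'ssd'
print(relevant_key('trait:HW_CPU_X86_AVX'))                # None
print(relevant_key('ssd'))                                 # 'ssd'
```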
The operators used for metadata matching are implemented in nova/scheduler/filters/extra_specs_ops.py:
import operator

# 1. The following operations are supported:
#   =, s==, s!=, s>=, s>, s<=, s<, <in>, <all-in>, ==, !=, >=, <=, <or>
# 2. Note that <or> is handled in a different way below.
# 3. If the first word in the extra_specs is not one of the operators,
#   it is ignored.
op_methods = {'=': lambda x, y: float(x) >= float(y),
              '<in>': lambda x, y: y in x,
              '<all-in>': lambda x, y: all(val in x for val in y),
              '==': lambda x, y: float(x) == float(y),
              '!=': lambda x, y: float(x) != float(y),
              '>=': lambda x, y: float(x) >= float(y),
              '<=': lambda x, y: float(x) <= float(y),
              's==': operator.eq,
              's!=': operator.ne,
              's<': operator.lt,
              's<=': operator.le,
              's>': operator.gt,
              's>=': operator.ge}

# ImagePropertiesFilter filter

Image metadata properties are defined on ImageMetaProps (nova/objects/image_meta.py); an excerpt of its fields and of the legacy-name map:

    fields = {
        ...
        # version of the hypervisor required by the image, eg '>= 2.6'
        'img_hv_requested_version': fields.VersionPredicateField(),
        # type of the hypervisor, eg kvm, ironic, xen
        'img_hv_type': fields.HVTypeField(),
        # Whether the image needs/expected config drive
        'img_config_drive': fields.ConfigDrivePolicyField(),
        # boolean flag to set space-saving or performance behavior on the
        # Datastore
        'img_linked_clone': fields.FlexibleBooleanField(),
        # Image mappings - related to Block device mapping data - mapping
        # of virtual image names to device names. This could be represented
        # as a formal data type, but is left as dict for same reason as
        # img_block_device_mapping field. It would arguably make sense for
        # the two to be combined into a single field and data type in the
        # future.
        'img_mappings': fields.ListOfDictOfNullableStringsField(),
        # image project id (set on upload)
        'img_owner_id': fields.StringField(),
        # root device name, used in snapshotting eg /dev/
        'img_root_device_name': fields.StringField(),
        # boolean - if false don't talk to nova agent
        'img_use_agent': fields.FlexibleBooleanField(),
        # integer value 1
        'img_version': fields.IntegerField(),
        # base64 of encoding of image signature
        'img_signature': fields.StringField(),
        # string indicating hash method used to compute image signature
        'img_signature_hash_method': fields.ImageSignatureHashTypeField(),
        # string indicating Castellan uuid of certificate
        # used to compute the image's signature
        'img_signature_certificate_uuid': fields.UUIDField(),
        # string indicating type of key used to compute image signature
        'img_signature_key_type': fields.ImageSignatureKeyTypeField(),
        # string of username with admin privileges
        'os_admin_user': fields.StringField(),
        # string of boot time command line arguments for the guest kernel
        'os_command_line': fields.StringField(),
        # the name of the specific guest operating system distro. This
        # is not done as an Enum since the list of operating systems is
        # growing incredibly fast, and valid values can be arbitrarily
        # user defined. Nova has no real need for strict validation so
        # leave it freeform
        'os_distro': fields.StringField(),
        # boolean - if true, then guest must support disk quiesce
        # or snapshot operation will be denied
        'os_require_quiesce': fields.FlexibleBooleanField(),
        # Secure Boot feature will be enabled by setting the "os_secure_boot"
        # image property to "required". Other options can be: "disabled" or
        # "optional".
        # "os:secure_boot" flavor extra spec value overrides the image
        # property value.
        'os_secure_boot': fields.SecureBootField(),
        # boolean - if using agent don't inject files, assume someone else is
        # doing that (cloud-init)
        'os_skip_agent_inject_files_at_boot': fields.FlexibleBooleanField(),
        # boolean - if using agent don't try inject ssh key, assume someone
        # else is doing that (cloud-init)
        'os_skip_agent_inject_ssh': fields.FlexibleBooleanField(),
        # The guest operating system family such as 'linux', 'windows' - this
        # is a fairly generic type. For a detailed type consider os_distro
        # instead
        'os_type': fields.OSTypeField(),
    }

    # The keys are the legacy property names and
    # the values are the current preferred names
    _legacy_property_map = {
        'architecture': 'hw_architecture',
        'owner_id': 'img_owner_id',
        'vmware_disktype': 'hw_disk_type',
        'vmware_image_version': 'img_version',
        'vmware_ostype': 'os_distro',
        'auto_disk_config': 'hw_auto_disk_config',
        'ipxe_boot': 'hw_ipxe_boot',
        'xenapi_device_id': 'hw_device_id',
        'xenapi_image_compression_level': 'img_compression_level',
        'vmware_linked_clone': 'img_linked_clone',
        'xenapi_use_agent': 'img_use_agent',
        'xenapi_skip_agent_inject_ssh': 'os_skip_agent_inject_ssh',
        'xenapi_skip_agent_inject_files_at_boot':
            'os_skip_agent_inject_files_at_boot',
        'cache_in_nova': 'img_cache_in_nova',
        'vm_mode': 'hw_vm_mode',
        'bittorrent': 'img_bittorrent',
        'mappings': 'img_mappings',
        'block_device_mapping': 'img_block_device_mapping',
        'bdm_v2': 'img_bdm_v2',
        'root_device_name': 'img_root_device_name',
        'hypervisor_version_requires': 'img_hv_requested_version',
        'hypervisor_type': 'img_hv_type',
    }
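The matching semantics of these operators can be illustrated with a simplified re-implementation (a sketch only; it omits the `<in>`, `<all-in>` and `<or>` forms that Nova's real `extra_specs_ops.match` handles):

```python
# Simplified sketch of extra-spec operator matching. A requirement
# string looks like "<op> <operand>", e.g. ">= 2" or "s== kvm";
# if the first word is not a known operator, Nova falls back to a
# literal string-equality comparison.
import operator

_op_methods = {
    '=': lambda x, y: float(x) >= float(y),   # numeric "at least"
    '==': lambda x, y: float(x) == float(y),
    '!=': lambda x, y: float(x) != float(y),
    '>=': lambda x, y: float(x) >= float(y),
    '<=': lambda x, y: float(x) <= float(y),
    's==': operator.eq,                       # string comparisons
    's!=': operator.ne,
    's>=': operator.ge,
    's>': operator.gt,
    's<=': operator.le,
    's<': operator.lt,
}

def match(value, req):
    words = req.split()
    if not words:
        return False
    method = _op_methods.get(words[0])
    if method is None:
        # No recognized operator: literal string equality.
        return value == req
    return method(value, words[1])

print(match('4', '>= 2'))        # numeric comparison
print(match('kvm', 's== kvm'))   # string comparison
print(match('ssd', 'ssd'))       # no operator: literal equality
```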
The filter mainly compares attributes such as the architecture required by the image against what the host supports (nova/scheduler/filters/image_props_filter.py):
class ImagePropertiesFilter(filters.BaseHostFilter):
    """Filter compute nodes that satisfy instance image properties.

    The ImagePropertiesFilter filters compute nodes that satisfy
    any architecture, hypervisor type, or virtual machine mode properties
    specified on the instance's image properties. Image properties are
    contained in the image dictionary in the request_spec.
    """

    RUN_ON_REBUILD = True

    # Image Properties and Compute Capabilities do not change within
    # a request
    run_filter_once_per_request = True

    def _instance_supported(self, host_state, image_props,
                            hypervisor_version):
        img_arch = image_props.get('hw_architecture')
        img_h_type = image_props.get('img_hv_type')
        img_vm_mode = image_props.get('hw_vm_mode')
        checked_img_props = (
            fields.Architecture.canonicalize(img_arch),
            fields.HVType.canonicalize(img_h_type),
            fields.VMMode.canonicalize(img_vm_mode))

        # Supported if no compute-related instance properties are specified
        if not any(checked_img_props):
            return True

        supp_instances = host_state.supported_instances
        # Not supported if an instance property is requested but nothing
        # advertised by the host.
        if not supp_instances:
            LOG.debug("Instance contains properties %(image_props)s, "
                      "but no corresponding supported_instances are "
                      "advertised by the compute node",
                      {'image_props': image_props})
            return False

        def _compare_props(props, other_props):
            for i in props:
                if i and i not in other_props:
                    return False
            return True

        def _compare_product_version(hypervisor_version, image_props):
            version_required = image_props.get('img_hv_requested_version')
            if not (hypervisor_version and version_required):
                return True
            img_prop_predicate = versionpredicate.VersionPredicate(
                'image_prop (%s)' % version_required)
            hyper_ver_str = versionutils.convert_version_to_str(
                hypervisor_version)
            return img_prop_predicate.satisfied_by(hyper_ver_str)

        for supp_inst in supp_instances:
            if _compare_props(checked_img_props, supp_inst):
                if _compare_product_version(hypervisor_version, image_props):
                    return True

        LOG.debug("Instance contains properties %(image_props)s "
                  "that are not provided by the compute node "
                  "supported_instances %(supp_instances)s or "
                  "hypervisor version %(hypervisor_version)s do not match",
                  {'image_props': image_props,
                   'supp_instances': supp_instances,
                   'hypervisor_version': hypervisor_version})
        return False

    def host_passes(self, host_state, spec_obj):
        """Check if host passes specified image properties.

        Returns True for compute nodes that satisfy image properties
        contained in the request_spec.
        """
        image_props = spec_obj.image.properties if spec_obj.image else {}

        if not self._instance_supported(host_state, image_props,
                                        host_state.hypervisor_version):
            LOG.debug("%(host_state)s does not support requested "
                      "instance_properties",
                      {'host_state': host_state})
            return False
        return True
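The supported-instances check boils down to tuple comparison: an instance passes if every image property it *does* specify appears in one of the host's advertised `(arch, hv_type, vm_mode)` tuples. A standalone sketch (the host's advertised tuples below are made up):

```python
# Sketch of the property-tuple comparison in ImagePropertiesFilter.
# `None` in the instance tuple means "property not specified".
def compare_props(props, other_props):
    # Every specified (non-None) property must be advertised by the host.
    return all(p in other_props for p in props if p)

def instance_supported(img_props, supported_instances):
    if not any(img_props):
        return True  # nothing requested -> any host qualifies
    return any(compare_props(img_props, s) for s in supported_instances)

# Hypothetical host advertising two (arch, hv_type, vm_mode) combinations:
supported = [('x86_64', 'kvm', 'hvm'), ('i686', 'kvm', 'hvm')]

print(instance_supported(('x86_64', None, 'hvm'), supported))  # matches
print(instance_supported(('aarch64', None, None), supported))  # no match
```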
# CoreFilter filter
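The check this filter implements reduces to simple arithmetic: the physical core count times the allocation ratio gives the schedulable vCPU pool. A standalone sketch (function and parameter names are invented for illustration):

```python
# Sketch of the CoreFilter arithmetic: physical cores times the
# allocation ratio give the schedulable (overcommitted) vCPU pool.
def has_sufficient_cores(vcpus_total, vcpus_used, cpu_allocation_ratio,
                         instance_vcpus):
    limit = vcpus_total * cpu_allocation_ratio
    # An instance may not overcommit against itself: it must fit
    # within the physical core count outright.
    if instance_vcpus > vcpus_total:
        return False
    free_vcpus = limit - vcpus_used
    return free_vcpus >= instance_vcpus

# 8 physical cores with ratio 16.0 -> up to 128 schedulable vCPUs.
print(has_sufficient_cores(8, 100, 16.0, 4))   # True: 128 - 100 = 28 free
print(has_sufficient_cores(8, 100, 16.0, 16))  # False: 16 > 8 physical cores
```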
CoreFilter and AggregateCoreFilter filter source code (nova/scheduler/filters/core_filter.py):
class BaseCoreFilter(filters.BaseHostFilter):

    RUN_ON_REBUILD = False

    def _get_cpu_allocation_ratio(self, host_state, spec_obj):
        raise NotImplementedError

    def host_passes(self, host_state, spec_obj):
        """Return True if host has sufficient CPU cores.

        :param host_state: nova.scheduler.host_manager.HostState
        :param spec_obj: filter options
        :return: boolean
        """
        if not host_state.vcpus_total:
            # Fail safe
            LOG.warning(_LW("VCPUs not set; assuming CPU collection broken"))
            return True

        instance_vcpus = spec_obj.vcpus
        cpu_allocation_ratio = self._get_cpu_allocation_ratio(host_state,
                                                              spec_obj)
        vcpus_total = host_state.vcpus_total * cpu_allocation_ratio

        # Only provide a VCPU limit to compute if the virt driver is reporting
        # an accurate count of installed VCPUs. (XenServer driver does not)
        if vcpus_total > 0:
            host_state.limits['vcpu'] = vcpus_total

            # Do not allow an instance to overcommit against itself, only
            # against other instances.
            if instance_vcpus > host_state.vcpus_total:
                LOG.debug("%(host_state)s does not have %(instance_vcpus)d "
                          "total cpus before overcommit, it only has %(cpus)d",
                          {'host_state': host_state,
                           'instance_vcpus': instance_vcpus,
                           'cpus': host_state.vcpus_total})
                return False

        free_vcpus = vcpus_total - host_state.vcpus_used
        if free_vcpus < instance_vcpus:
            LOG.debug("%(host_state)s does not have %(instance_vcpus)d "
                      "usable vcpus, it only has %(free_vcpus)d usable "
                      "vcpus",
                      {'host_state': host_state,
                       'instance_vcpus': instance_vcpus,
                       'free_vcpus': free_vcpus})
            return False

        return True


class CoreFilter(BaseCoreFilter):
    """CoreFilter filters based on CPU core utilization."""

    def _get_cpu_allocation_ratio(self, host_state, spec_obj):
        return host_state.cpu_allocation_ratio


class AggregateCoreFilter(BaseCoreFilter):
    """AggregateCoreFilter with per-aggregate CPU subscription flag.

    Fall back to global cpu_allocation_ratio if no per-aggregate setting
    found.
    """

    def _get_cpu_allocation_ratio(self, host_state, spec_obj):
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'cpu_allocation_ratio')
        try:
            ratio = utils.validate_num_values(
                aggregate_vals, host_state.cpu_allocation_ratio, cast_to=float)
        except ValueError as e:
            LOG.warning(_LW("Could not decode cpu_allocation_ratio: '%s'"), e)
            ratio = host_state.cpu_allocation_ratio

        return ratio

Validating the parameter (nova/scheduler/filters/utils.py):

def validate_num_values(vals, default=None, cast_to=int, based_on=min):
    """Returns a correctly casted value based on a set of values.

    This method is useful to work with per-aggregate filters. It takes
    a set of values, then returns the 'based_on' {min/max} of the set,
    converted to 'cast_to', or the default value.

    Note: The cast implies a possible ValueError
    """
    num_values = len(vals)
    if num_values == 0:
        return default
    if num_values > 1:
        LOG.info(_LI("%(num_values)d values found, "
                     "of which the minimum value will be used."),
                 {'num_values': num_values})
    return based_on([cast_to(val) for val in vals])

The CPU topology analysis script (cpu_view.sh; the table-printing section is elided):

#!/bin/bash
# Print the CPU topology from /proc/cpuinfo.

function get_nr_processor()
{
    grep '^processor' /proc/cpuinfo | wc -l
}

function get_nr_socket()
{
    grep 'physical id' /proc/cpuinfo | awk -F: '{print $2 | "sort -un"}' | wc -l
}

function get_nr_siblings()
{
    grep 'siblings' /proc/cpuinfo | awk -F: '{print $2 | "sort -un"}'
}

function get_nr_cores_of_socket()
{
    grep 'cpu cores' /proc/cpuinfo | awk -F: '{print $2 | "sort -un"}'
}

...

awk -F: '{
    if ($1 ~ /processor/) {
        gsub(/ /,"",$2);
        p_id=$2;
    } else if ($1 ~ /physical id/){
        gsub(/ /,"",$2);
        s_id=$2;
        arr[s_id]=arr[s_id] " " p_id
    }
} END{
    for (i in arr)
        printf "Socket %s:%s\n", i, arr[i];
}' /proc/cpuinfo

echo
echo '===== CPU Info Summary ====='
echo
nr_processor=`get_nr_processor`
echo "Logical processors: $nr_processor"
nr_socket=`get_nr_socket`
echo "Physical socket: $nr_socket"
nr_siblings=`get_nr_siblings`
echo "Siblings in one socket: $nr_siblings"
nr_cores=`get_nr_cores_of_socket`
echo "Cores in one socket: $nr_cores"
let nr_cores*=nr_socket
echo "Cores in total: $nr_cores"
if [ "$nr_cores" = "$nr_processor" ]; then
    echo "Hyper-Threading: off"
else
    echo "Hyper-Threading: on"
fi
echo
echo '===== END ====='

Running the CPU topology analysis script (the full 72-row table is abbreviated):

$ ./cpu_view.sh
===== CPU Topology Table =====
+--------------+---------+-----------+
| Processor ID | Core ID | Socket ID |
+--------------+---------+-----------+
| 0            | 0       | 0         |
| 1            | 1       | 0         |
| 2            | 2       | 0         |
| 3            | 3       | 0         |
| 4            | 4       | 0         |
| 5            | 8       | 0         |
...
| 70           | 26      | 1         |
| 71           | 27      | 1         |
+--------------+---------+-----------+
Socket 0: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
Socket 1: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
===== CPU Info Summary =====
Logical processors: 72
Physical socket: 2
Siblings in one socket: 36
Cores in one socket: 18
Cores in total: 36
Hyper-Threading: on
===== END =====

Pinning tests

#### Creating an ordinary virtual machine

Create a flavor for NUMA testing:

$ openstack flavor create --vcpus 2 --ram 64 --disk 1 machine.numa

Create an ordinary virtual machine:

$ openstack server create --image cirros --flavor machine.numa --key-name mykey --nic net-id=8d01509e-4a3a-497a-9118-3827c1e37672 --availability-zone az01:osdev-01 server.numa1

Inspect the libvirt configuration:

$ openstack server show server.numa1 | grep instance_name | awk '{print $4}'
$ virsh edit instance-0000001f
...
Check CPU affinity and allocation:

$ ps -aux | grep `openstack server show server.numa1 | grep instance_name | awk '{print $4}'` | awk 'NR==1 {print $2}' | xargs taskset -c -p
pid 180564's current affinity list: 0-71

$ ps -aux | grep `openstack server show server.numa1 | grep instance_name | awk '{print $4}'` | awk 'NR==1 {print $2}' | xargs ps -m -o pid,psr,comm -p
   PID PSR COMMAND
180564   - qemu-kvm
     -  61 -
     -  22 -
     -  13 -
     -   1 -
     -  58 -

Set the flavor's NUMA properties:

$ nova flavor-key machine.numa set hw:numa_nodes=1 hw:numa_cpus.0=0,1 hw:numa_mem.0=64
# nova flavor-key machine.numa unset hw:numa_nodes hw:numa_cpus.0 hw:numa_mem.0
$ openstack flavor show machine.numa
+----------------------------+-------------------------------------------------------------+
| Field                      | Value                                                       |
+----------------------------+-------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                       |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                           |
| access_project_ids         | None                                                        |
| disk                       | 1                                                           |
| id                         | fc37ea6f-3e69-422f-a05e-0ee56837a84d                        |
| name                       | machine.numa                                                |
| os-flavor-access:is_public | True                                                        |
| properties                 | hw:numa_cpus.0='0,1', hw:numa_mem.0='64', hw:numa_nodes='1' |
| ram                        | 64                                                          |
| rxtx_factor                | 1.0                                                         |
| swap                       |                                                             |
| vcpus                      | 2                                                           |
+----------------------------+-------------------------------------------------------------+

Restart the previously created virtual machine:

$ openstack server stop server.numa1
$ openstack server start server.numa1

Its NUMA properties are unchanged:

$ openstack server show server.numa1 | grep instance_name | awk '{print $4}'
$ virsh edit instance-00000022
...
Check CPU affinity and allocation:

$ ps -aux | grep `openstack server show server.numa1 | grep instance_name | awk '{print $4}'` | awk 'NR==1 {print $2}' | xargs taskset -c -p
pid 219152's current affinity list: 0-71

$ ps -aux | grep `openstack server show server.numa1 | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} find /proc/{}/task/ -name "status" | xargs grep Cpus_allowed_list
/proc/219152/task/219152/status:Cpus_allowed_list: 0-71
/proc/219152/task/219220/status:Cpus_allowed_list: 0-71
/proc/219152/task/219225/status:Cpus_allowed_list: 0-71
/proc/219152/task/219227/status:Cpus_allowed_list: 0-71
/proc/219152/task/219250/status:Cpus_allowed_list: 0-71

$ ps -aux | grep `openstack server show server.numa1 | grep instance_name | awk '{print $4}'` | awk 'NR==1 {print $2}' | xargs ps -m -o pid,psr,comm -p
   PID PSR COMMAND
219152   - qemu-kvm
     -  31 -
     -  64 -
     -   1 -
     -  12 -
     -  55 -

Check memory allocation:

$ ps -aux | grep `openstack server show server.numa1 | grep instance_name | awk '{print $4}'` | awk 'NR==1 {print $2}' | xargs -I {} cat /proc/{}/numa_maps
...
55cdcffcd000 default file=/usr/libexec/qemu-kvm mapped=1328 mapmax=2 N0=1287 N1=41 kernelpagesize_kB=4
55cdd0944000 default file=/usr/libexec/qemu-kvm anon=248 dirty=248 N1=248 kernelpagesize_kB=4
55cdd0afc000 default file=/usr/libexec/qemu-kvm anon=95 dirty=95 N0=5 N1=90 kernelpagesize_kB=4
55cdd0b5c000 default anon=19 dirty=19 N0=4 N1=15 kernelpagesize_kB=4
55cdd1d83000 default heap anon=10419 dirty=10419 N0=1158 N1=9261 kernelpagesize_kB=4
7f35cbbba000 default
7f35cbbbb000 default anon=1 dirty=1 N0=1 kernelpagesize_kB=4
7f35cbcbb000 default
7f35cbcbc000 default anon=1 dirty=1 N0=1 kernelpagesize_kB=4
7f35cedc2000 default
7f35cedc3000 default anon=4 dirty=4 N0=4 kernelpagesize_kB=4
7f35d05c5000 default
7f35d05c6000 default anon=1 dirty=1 N1=1 kernelpagesize_kB=4
7f35d06c6000 default
...
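The find/grep pipeline above can also be scripted; a small sketch that collects the same per-task affinity data and expands the CPU-list notation (assumes a Linux /proc layout; helper names are invented):

```python
# Sketch: collect Cpus_allowed_list for every task of a process,
# mirroring the find/grep pipeline used above, and expand the
# "0-17,36-53" list notation into a set of CPU ids.
import os

def cpus_allowed(pid):
    """Map each task ID of `pid` to its Cpus_allowed_list value."""
    result = {}
    task_dir = '/proc/%d/task' % pid
    for tid in os.listdir(task_dir):
        with open(os.path.join(task_dir, tid, 'status')) as f:
            for line in f:
                if line.startswith('Cpus_allowed_list:'):
                    result[int(tid)] = line.split(':', 1)[1].strip()
    return result

def parse_cpu_list(s):
    """Expand a list like '0-17,36-53' into a set of CPU ids."""
    cpus = set()
    for part in s.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Example: cpus_allowed(219152) would map each qemu-kvm task id to '0-71'.
```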
#### Creating a pinned virtual machine

Create a virtual machine with the NUMA parameters set; as the output below shows, the VM's CPUs are all placed on node 0:

$ openstack server create --image cirros --flavor machine.numa --key-name mykey --nic net-id=8d01509e-4a3a-497a-9118-3827c1e37672 --availability-zone az01:osdev-01 server.numa2

Inspect the libvirt configuration:

$ openstack server show server.numa2 | grep instance_name | awk '{print $4}'
instance-00000024
$ virsh edit instance-00000024
...

Check CPU affinity and pinning:

$ ps -aux | grep `openstack server show server.numa2 | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs taskset -c -p
pid 1139's current affinity list: 0-17,36-53

$ ps -aux | grep `openstack server show server.numa2 | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} find /proc/{}/task/ -name "status" | xargs grep Cpus_allowed_list
/proc/1139/task/1139/status:Cpus_allowed_list: 0-17,36-53
/proc/1139/task/1143/status:Cpus_allowed_list: 0-17,36-53
/proc/1139/task/1148/status:Cpus_allowed_list: 0-17,36-53
/proc/1139/task/1149/status:Cpus_allowed_list: 0-17,36-53
/proc/1139/task/1151/status:Cpus_allowed_list: 0-17,36-53

$ ps -aux | grep `openstack server show server.numa2 | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs ps -m -o pid,psr,comm -p
  PID PSR COMMAND
 1139   - qemu-kvm
    -   3 -
    -   8 -
    -   2 -
    -   6 -
    -  51 -

Check memory allocation:

$ ps -aux | grep `openstack server show server.numa2 | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} cat /proc/{}/numa_maps
...
56133b0b3000 default file=/usr/libexec/qemu-kvm mapped=1326 mapmax=2 N0=1292 N1=34 kernelpagesize_kB=4
56133ba2a000 default file=/usr/libexec/qemu-kvm anon=248 dirty=248 N0=248 kernelpagesize_kB=4
56133bbe2000 default file=/usr/libexec/qemu-kvm anon=95 dirty=95 N0=95 kernelpagesize_kB=4
56133bc42000 default anon=19 dirty=19 N0=19 kernelpagesize_kB=4
56133db66000 default heap anon=3415 dirty=3415 N0=3415 kernelpagesize_kB=4
...

#### Memory allocation comparison

The memory-allocation comparison test script (the script body is elided; only its header survives):

#!/usr/bin/perl
# Copyright (c) 2010, Jeremy Cole
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of either: the GNU General Public License as published
# by the Free Software Foundation; or the Artistic License.
#
# See http://dev.perl.org/licenses/ for more information.
#
# This script expects a numa_maps file as input. It is normally run in
# the following way:
#
#   # perl numa-maps-summary.pl < /proc/pid/numa_maps
#
# Additionally, it can be used (of course) with saved numa_maps, and it
# will also accept numa_maps output with ">
...

The NUMA fitting function numa_fit_instance_to_host (nova/virt/hardware.py):

def numa_fit_instance_to_host(
        host_topology, instance_topology, limits=None,
        pci_requests=None, pci_stats=None):
    if not (host_topology and instance_topology):
        LOG.debug("Require both a host and instance NUMA topology to "
                  "fit instance on host.")
        return
    elif len(host_topology) < len(instance_topology):
        LOG.debug("There are not enough NUMA nodes on the system to schedule "
                  "the instance correctly. Required: %(required)s, actual: "
                  "%(actual)s",
                  {'required': len(instance_topology),
                   'actual': len(host_topology)})
        return

    # TODO(ndipanov): We may want to sort permutations differently
    # depending on whether we want packing/spreading over NUMA nodes
    for host_cell_perm in itertools.permutations(
            host_topology.cells, len(instance_topology)):
        cells = []
        for host_cell, instance_cell in zip(
                host_cell_perm, instance_topology.cells):
            try:
                got_cell = _numa_fit_instance_cell(
                    host_cell, instance_cell, limits)
            except exception.MemoryPageSizeNotSupported:
                # This exception will been raised if instance cell's
                # custom pagesize is not supported with host cell in
                # _numa_cell_supports_pagesize_request function.
                break
            if got_cell is None:
                break
            cells.append(got_cell)
        if len(cells) != len(host_cell_perm):
            continue
        if not pci_requests or ((pci_stats is not None) and
                                pci_stats.support_requests(pci_requests,
                                                           cells)):
            return objects.InstanceNUMATopology(cells=cells)

#### Virtualization driver

The virtualization driver base class defines spawn(), which creates a virtual machine instance (nova/virt/driver.py):

class ComputeDriver(object):
    """Base class for compute drivers.
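The permutation search in numa_fit_instance_to_host can be modeled in miniature (cells reduced to plain `(cpus, memory)` tuples; all names are invented for illustration):

```python
# Stripped-down model of fitting instance NUMA cells onto host cells:
# try every permutation of host cells and accept the first one in which
# each instance cell fits its paired host cell.
import itertools

def fit(host_cells, instance_cells):
    """host_cells/instance_cells: lists of (free_cpus, free_mem_mb)."""
    if len(host_cells) < len(instance_cells):
        return None  # not enough NUMA nodes on the host
    for perm in itertools.permutations(host_cells, len(instance_cells)):
        if all(hc[0] >= ic[0] and hc[1] >= ic[1]
               for hc, ic in zip(perm, instance_cells)):
            return list(perm)  # chosen host cells, in instance-cell order
    return None

host = [(2, 1024), (8, 4096)]
inst = [(4, 2048)]
print(fit(host, inst))  # the 8-CPU cell is chosen
```

Nova's real version additionally checks page sizes and PCI requests per cell, but the permutation-and-pairing skeleton is the same.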
    The interface to this class talks in terms of 'instances' (Amazon EC2 and
    internal Nova terminology), by which we mean 'running virtual machine'
    (XenAPI terminology) or domain (Xen or libvirt terminology).

    An instance has an ID, which is the identifier chosen by Nova to represent
    the instance further up the stack. This is unfortunately also called a
    'name' elsewhere. As far as this layer is concerned, 'instance ID' and
    'instance name' are synonyms.

    Note that the instance ID or name is not human-readable or
    customer-controlled -- it's an internal ID chosen by Nova. At the
    nova.virt layer, instances do not have human-readable names at all --
    such things are only known higher up the stack.

    Most virtualization platforms will also have their own identity schemes,
    to uniquely identify a VM or domain. These IDs must stay internal to the
    platform-specific layer, and never escape the connection interface. The
    platform-specific layer is responsible for keeping track of which instance
    ID maps to which platform-specific ID, and vice versa.

    Some methods here take an instance of nova.compute.service.Instance. This
    is the data structure used by nova.compute to store details regarding an
    instance, and pass them into this layer. This layer is responsible for
    translating that generic data structure into terms that are specific to
    the virtualization platform.
    """

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        """Create a new instance/VM/domain on the virtualization platform.

        Once this successfully completes, the instance should be
        running (power_state.RUNNING).

        If this fails, any partial instance should be completely
        cleaned up, and the virtualization platform should be in the state
        that it was before this call began.

        :param context: security context
        :param instance: nova.objects.instance.Instance
                         This function should use the data there to guide
                         the creation of the new instance.
        :param nova.objects.ImageMeta image_meta:
            The metadata of the image of the instance.
        :param injected_files: User files to inject into instance.
        :param admin_password: Administrator password to set in instance.
        :param network_info: instance network information
        :param block_device_info: Information about block devices to be
                                  attached to the instance.
        """
        raise NotImplementedError()

The main attributes of a virtual machine instance (nova/objects/instance.py):

# TODO(berrange): Remove NovaObjectDictCompat
@base.NovaObjectRegistry.register
class Instance(base.NovaPersistentObject, base.NovaObject,
               base.NovaObjectDictCompat):
    # Version 2.0: Initial version
    # Version 2.1: Added services
    # Version 2.2: Added keypairs
    # Version 2.3: Added device_metadata
    VERSION = '2.3'

    fields = {
        'id': fields.IntegerField(),
        'user_id': fields.StringField(nullable=True),
        'project_id': fields.StringField(nullable=True),
        'image_ref': fields.StringField(nullable=True),
        'kernel_id': fields.StringField(nullable=True),
        'ramdisk_id': fields.StringField(nullable=True),
        'hostname': fields.StringField(nullable=True),
        'launch_index': fields.IntegerField(nullable=True),
        'key_name': fields.StringField(nullable=True),
        'key_data': fields.StringField(nullable=True),
        'power_state': fields.IntegerField(nullable=True),
        'vm_state': fields.StringField(nullable=True),
        'task_state': fields.StringField(nullable=True),
        'services': fields.ObjectField('ServiceList'),
        'memory_mb': fields.IntegerField(nullable=True),
        'vcpus': fields.IntegerField(nullable=True),
        'root_gb': fields.IntegerField(nullable=True),
        'ephemeral_gb': fields.IntegerField(nullable=True),
        'ephemeral_key_uuid': fields.UUIDField(nullable=True),
        'host': fields.StringField(nullable=True),
        'node': fields.StringField(nullable=True),
        'instance_type_id': fields.IntegerField(nullable=True),
        'user_data': fields.StringField(nullable=True),
        'reservation_id': fields.StringField(nullable=True),
        'launched_at': fields.DateTimeField(nullable=True),
        'terminated_at': fields.DateTimeField(nullable=True),
        'availability_zone': fields.StringField(nullable=True),
        'display_name': fields.StringField(nullable=True),
        'display_description': fields.StringField(nullable=True),
        'launched_on': fields.StringField(nullable=True),
        # NOTE(jdillaman): locked deprecated in favor of locked_by,
        # to be removed in Icehouse
        'locked': fields.BooleanField(default=False),
        'locked_by': fields.StringField(nullable=True),
        'os_type': fields.StringField(nullable=True),
        'architecture': fields.StringField(nullable=True),
        'vm_mode': fields.StringField(nullable=True),
        'uuid': fields.UUIDField(),
        'root_device_name': fields.StringField(nullable=True),
        'default_ephemeral_device': fields.StringField(nullable=True),
        'default_swap_device': fields.StringField(nullable=True),
        'config_drive': fields.StringField(nullable=True),
        'access_ip_v4': fields.IPV4AddressField(nullable=True),
        'access_ip_v6': fields.IPV6AddressField(nullable=True),
        'auto_disk_config': fields.BooleanField(default=False),
        'progress': fields.IntegerField(nullable=True),
        'shutdown_terminate': fields.BooleanField(default=False),
        'disable_terminate': fields.BooleanField(default=False),
        'cell_name': fields.StringField(nullable=True),
        'metadata': fields.DictOfStringsField(),
        'system_metadata': fields.DictOfNullableStringsField(),
        'info_cache': fields.ObjectField('InstanceInfoCache',
                                         nullable=True),
        'security_groups': fields.ObjectField('SecurityGroupList'),
        'fault': fields.ObjectField('InstanceFault', nullable=True),
        'cleaned': fields.BooleanField(default=False),
        'pci_devices': fields.ObjectField('PciDeviceList', nullable=True),
        'numa_topology': fields.ObjectField('InstanceNUMATopology',
                                            nullable=True),
        'pci_requests': fields.ObjectField('InstancePCIRequests',
                                           nullable=True),
        'device_metadata': fields.ObjectField('InstanceDeviceMetadata',
                                              nullable=True),
        'tags': fields.ObjectField('TagList'),
        'flavor': fields.ObjectField('Flavor'),
        'old_flavor': fields.ObjectField('Flavor', nullable=True),
        'new_flavor': fields.ObjectField('Flavor',
                                         nullable=True),
        'vcpu_model': fields.ObjectField('VirtCPUModel', nullable=True),
        'ec2_ids': fields.ObjectField('EC2Ids'),
        'migration_context': fields.ObjectField('MigrationContext',
                                                nullable=True),
        'keypairs': fields.ObjectField('KeyPairList'),
    }

    obj_extra_fields = ['name']

The libvirt driver's spawn implementation (nova/virt/libvirt/driver.py):

class LibvirtDriver(driver.ComputeDriver):

    # NOTE(ilyaalekseyev): Implementation like in multinics
    # for xenapi(tr3buchet)
    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance,
                                            image_meta,
                                            block_device_info)
        injection_info = InjectionInfo(network_info=network_info,
                                       files=injected_files,
                                       admin_pass=admin_password)
        gen_confdrive = functools.partial(self._create_configdrive,
                                          context, instance,
                                          injection_info)
        self._create_image(context, instance, disk_info['mapping'],
                           injection_info=injection_info,
                           block_device_info=block_device_info)

        # Required by Quobyte CI
        self._ensure_console_log_for_instance(instance)

        xml = self._get_guest_xml(context, instance, network_info,
                                  disk_info, image_meta,
                                  block_device_info=block_device_info)
        self._create_domain_and_network(
            context, xml, instance, network_info, disk_info,
            block_device_info=block_device_info,
            post_xml_callback=gen_confdrive,
            destroy_disks_on_failure=True)
        LOG.debug("Instance is running", instance=instance)

        def _wait_for_boot():
            """Called at an interval until the VM is running."""
            state = self.get_info(instance).state

            if state == power_state.RUNNING:
                LOG.info(_LI("Instance spawned successfully."),
                         instance=instance)
                raise loopingcall.LoopingCallDone()

        timer = loopingcall.FixedIntervalLoopingCall(_wait_for_boot)
        timer.start(interval=0.5).wait()

Generating the guest XML configuration (nova/virt/libvirt/driver.py):

    def _get_guest_xml(self, context, instance, network_info, disk_info,
                       image_meta, rescue=None,
                       block_device_info=None):
        # NOTE(danms): Stringifying a NetworkInfo will take a lock.
Do # this ahead of time so that we don't acquire it while also # holding the logging lock. network_info_str = str(network_info) msg = ('Start _get_guest_xml ' 'network_info=%(network_info)s ' 'disk_info=%(disk_info)s ' 'image_meta=%(image_meta)s rescue=%(rescue)s ' 'block_device_info=%(block_device_info)s' % {'network_info': network_info_str, 'disk_info': disk_info, 'image_meta': image_meta, 'rescue': rescue, 'block_device_info': block_device_info}) # NOTE(mriedem): block_device_info can contain auth_password so we # need to sanitize the password in the message. LOG.debug(strutils.mask_password(msg), instance=instance) conf = self._get_guest_config(instance, network_info, image_meta, disk_info, rescue, block_device_info, context) xml = conf.to_xml() LOG.debug('End _get_guest_xml xml=%(xml)s', {'xml': xml}, instance=instance) return xml 生成虚拟机基本配置(nova/virt/libvirt/driver.py ): def _get_guest_config(self, instance, network_info, image_meta, disk_info, rescue=None, block_device_info=None, context=None): """Get config data for parameters. :param rescue: optional dictionary that should contain the key 'ramdisk_id' if a ramdisk is needed for the rescue image and 'kernel_id' if a kernel is needed for the rescue image. 
""" flavor = instance.flavor inst_path = libvirt_utils.get_instance_path(instance) disk_mapping = disk_info['mapping'] virt_type = CONF.libvirt.virt_type guest = vconfig.LibvirtConfigGuest() guest.virt_type = virt_type guest.name = instance.name guest.uuid = instance.uuid # We are using default unit for memory: KiB guest.memory = flavor.memory_mb * units.Ki guest.vcpus = flavor.vcpus allowed_cpus = hardware.get_vcpu_pin_set() guest_numa_config = self._get_guest_numa_config( instance.numa_topology, flavor, allowed_cpus, image_meta) guest.cpuset = guest_numa_config.cpuset guest.cputune = guest_numa_config.cputune guest.numatune = guest_numa_config.numatune guest.membacking = self._get_guest_memory_backing_config( instance.numa_topology, guest_numa_config.numatune, flavor) guest.metadata.append(self._get_guest_config_meta(instance)) guest.idmaps = self._get_guest_idmaps() for event in self._supported_perf_events: guest.add_perf_event(event) self._update_guest_cputune(guest, flavor, virt_type) guest.cpu = self._get_guest_cpu_config( flavor, image_meta, guest_numa_config.numaconfig, instance.numa_topology) # Notes(yjiang5): we always sync the instance's vcpu model with # the corresponding config file. 
instance.vcpu_model = self._cpu_config_to_vcpu_model( guest.cpu, instance.vcpu_model) if 'root' in disk_mapping: root_device_name = block_device.prepend_dev( disk_mapping['root']['dev']) else: root_device_name = None if root_device_name: # NOTE(yamahata): # for nova.api.ec2.cloud.CloudController.get_metadata() instance.root_device_name = root_device_name guest.os_type = (fields.VMMode.get_from_instance(instance) or self._get_guest_os_type(virt_type)) caps = self._host.get_capabilities() self._configure_guest_by_virt_type(guest, virt_type, caps, instance, image_meta, flavor, root_device_name) if virt_type not in ('lxc', 'uml'): self._conf_non_lxc_uml(virt_type, guest, root_device_name, rescue, instance, inst_path, image_meta, disk_info) self._set_features(guest, instance.os_type, caps, virt_type) self._set_clock(guest, instance.os_type, image_meta, virt_type) storage_configs = self._get_guest_storage_config( instance, image_meta, disk_info, rescue, block_device_info, flavor, guest.os_type) for config in storage_configs: guest.add_device(config) for vif in network_info: config = self.vif_driver.get_config( instance, vif, image_meta, flavor, virt_type, self._host) guest.add_device(config) self._create_consoles(virt_type, guest, instance, flavor, image_meta) pointer = self._get_guest_pointer_model(guest.os_type, image_meta) if pointer: guest.add_device(pointer) if (CONF.spice.enabled and CONF.spice.agent_enabled and virt_type not in ('lxc', 'uml', 'xen')): channel = vconfig.LibvirtConfigGuestChannel() channel.type = 'spicevmc' channel.target_name = "com.redhat.spice.0" guest.add_device(channel) # NB some versions of libvirt support both SPICE and VNC # at the same time. We're not trying to second guess which # those versions are. We'll just let libvirt report the # errors appropriately if the user enables both. 
add_video_driver = False if ((CONF.vnc.enabled and virt_type not in ('lxc', 'uml'))): graphics = vconfig.LibvirtConfigGuestGraphics() graphics.type = "vnc" graphics.keymap = CONF.vnc.keymap graphics.listen = CONF.vnc.vncserver_listen guest.add_device(graphics) add_video_driver = True if (CONF.spice.enabled and virt_type not in ('lxc', 'uml', 'xen')): graphics = vconfig.LibvirtConfigGuestGraphics() graphics.type = "spice" graphics.keymap = CONF.spice.keymap graphics.listen = CONF.spice.server_listen guest.add_device(graphics) add_video_driver = True if add_video_driver: self._add_video_driver(guest, image_meta, flavor) # Qemu guest agent only support 'qemu' and 'kvm' hypervisor if virt_type in ('qemu', 'kvm'): self._set_qemu_guest_agent(guest, flavor, instance, image_meta) if virt_type in ('xen', 'qemu', 'kvm'): # Get all generic PCI devices (non-SR-IOV). for pci_dev in pci_manager.get_instance_pci_devs(instance): guest.add_device(self._get_guest_pci_device(pci_dev)) else: # PCI devices is only supported for hypervisor 'xen', 'qemu' and # 'kvm'. 
pci_devs = pci_manager.get_instance_pci_devs(instance, 'all') if len(pci_devs) > < /proc/cpuinfo ####普通虚拟机 使用默认策略创建虚拟机: $ openstack server create --image cirros --flavor machine.cpu --key-name mykey --nic net-id=8d01509e-4a3a-497a-9118-3827c1e37672 --availability-zone az01:osdev-01 server.cpu.default 查看虚拟机CPU亲和性: $ ps -aux | grep `openstack server show server.cpu.default | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs taskset -c -ppid 40296's current affinity list: 0-17,36-53$ ps -aux | grep `openstack server show server.cpu.default | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} find /proc/{}/task/ -name "status" | xargs grep Cpus_allowed_list/proc/40296/task/40296/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40310/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40313/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40314/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40315/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40316/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40317/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40318/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40319/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40320/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40321/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40322/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40323/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40324/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40325/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40326/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40327/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40328/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40329/status:Cpus_allowed_list: 0-17,36-53/proc/40296/task/40337/status:Cpus_allowed_list: 
0-17,36-53
/proc/40296/task/40580/status:Cpus_allowed_list: 0-17,36-53
/proc/40296/task/40587/status:Cpus_allowed_list: 0-17,36-53
/proc/40296/task/40590/status:Cpus_allowed_list: 0-17,36-53
/proc/40296/task/40591/status:Cpus_allowed_list: 0-17,36-53
/proc/40296/task/40902/status:Cpus_allowed_list: 0-17,36-53
/proc/40296/task/40903/status:Cpus_allowed_list: 0-17,36-53
/proc/40296/task/41007/status:Cpus_allowed_list: 0-17,36-53

View the virtual machine's CPU running state:

$ ps -aux | grep `openstack server show server.cpu.default | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs ps -m -o pid,psr,comm -p
  PID PSR COMMAND
40296   - qemu-kvm
    -  15 -
    -  42 -
    -   7 -
    -   5 -
    -   2 -
    -   0 -
    -  14 -
    -  41 -
    -   0 -
    -   7 -
    -   0 -
    -  11 -
    -   5 -
    -  43 -
    -   9 -
    -   0 -
    -   5 -
    -   2 -
    -  53 -
$ ps -aux | grep `openstack server show server.cpu.default | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs ps -m -o pid,psr,comm -p
  PID PSR COMMAND
40296   - qemu-kvm
    -  48 -
    -  42 -
    -   1 -
    -  16 -
    -   3 -
    -   8 -
    -   5 -
    -   8 -
    -  11 -
    -   0 -
    -   4 -
    -   5 -
    -   7 -
    -   5 -
    -   8 -
    -   0 -
    -  13 -
    -  10 -
    -  53 -

The CPUs the threads run on keep changing: sometimes two threads share a core, sometimes they do not.

#### avoid virtual machine

Set the avoid binding policy:

$ nova flavor-key machine.cpu set hw:cpu_policy=dedicated hw:cpu_thread_policy=avoid

Create a virtual machine:

$ openstack server create --image cirros --flavor machine.cpu --key-name mykey --nic net-id=8d01509e-4a3a-497a-9118-3827c1e37672 --availability-zone az01:osdev-01 server.cpu.avoid
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
(HTTP 500) (Request-ID: req-fbb4aef8-a2bb-47af-bea0-e776c83ae5e9)

The virtual machine could not be created. View the error log:

$ tailf /var/lib/docker/volumes/kolla_logs/_data/nova/nova-api.log
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions [req-fbb4aef8-a2bb-47af-bea0-e776c83ae5e9 03e0cf5adea04b73a13bc45a0306171b 1b50364d35624d0e8affe0721866fda1 - default default] Unexpected exception in API method
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 338, in wrapped
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 642, in create
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     **create_kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     rv = f(*args, **kwargs)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/api.py", line 1620, in create
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/api.py", line 1186, in _create_instance
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     reservation_id, max_count)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/api.py", line 889, in _validate_and_build_base_options
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     instance_type, image_meta)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/hardware.py", line 1293, in numa_get_constraints
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     cell.cpu_thread_policy = cpu_thread_policy
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 72, in setter
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     field_value = field.coerce(self, name, value)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line 195, in coerce
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     return self._type.coerce(obj, attr, value)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions   File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line 317, in coerce
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions     raise ValueError(msg)
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions ValueError: Field value avoid is invalid
2018-03-21 19:34:03.100 27 ERROR nova.api.openstack.extensions
2018-03-21 19:34:03.102 27 INFO nova.api.openstack.wsgi [req-fbb4aef8-a2bb-47af-bea0-e776c83ae5e9 03e0cf5adea04b73a13bc45a0306171b 1b50364d35624d0e8affe0721866fda1 - default default] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
The avoid parameter is invalid: Nova rejects it when coercing the CPUThreadAllocationPolicy field, so avoid cannot be requested by users.

#### prefer virtual machine

Set the prefer binding policy:

$ nova flavor-key machine.cpu set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer

Create a virtual machine:

$ openstack server create --image cirros --flavor machine.cpu --key-name mykey --nic net-id=8d01509e-4a3a-497a-9118-3827c1e37672 --availability-zone az01:osdev-01 server.cpu.prefer

View the virtual machine's CPU affinity:

$ ps -aux | grep `openstack server show server.cpu.prefer | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs taskset -c -p
pid 187669's current affinity list: 0-2,7,8,14-16,36-38,43,44,50-52

$ ps -aux | grep `openstack server show server.cpu.prefer | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} find /proc/{}/task/ -name "status" | xargs grep Cpus_allowed_list
/proc/187669/task/187669/status:Cpus_allowed_list: 0-2,7-8,14-16,36-38,43-44,50-52
/proc/187669/task/187671/status:Cpus_allowed_list: 0-2,7-8,14-16,36-38,43-44,50-52
/proc/187669/task/187675/status:Cpus_allowed_list: 43
/proc/187669/task/187676/status:Cpus_allowed_list: 7
/proc/187669/task/187677/status:Cpus_allowed_list: 16
/proc/187669/task/187678/status:Cpus_allowed_list: 52
/proc/187669/task/187679/status:Cpus_allowed_list: 2
/proc/187669/task/187680/status:Cpus_allowed_list: 38
/proc/187669/task/187681/status:Cpus_allowed_list: 8
/proc/187669/task/187682/status:Cpus_allowed_list: 44
/proc/187669/task/187683/status:Cpus_allowed_list: 50
/proc/187669/task/187684/status:Cpus_allowed_list: 14
/proc/187669/task/187685/status:Cpus_allowed_list: 0
/proc/187669/task/187686/status:Cpus_allowed_list: 36
/proc/187669/task/187687/status:Cpus_allowed_list: 51
/proc/187669/task/187688/status:Cpus_allowed_list: 15
/proc/187669/task/187689/status:Cpus_allowed_list: 1
/proc/187669/task/187690/status:Cpus_allowed_list: 37
/proc/187669/task/187692/status:Cpus_allowed_list: 0-2,7-8,14-16,36-38,43-44,50-52

View the virtual machine's CPU running state:

$ ps -aux | grep `openstack server show server.cpu.prefer | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs ps -m -o pid,psr,comm -p | awk 'NR>2 {print $2}' | sort -n | uniq | xargs -n1 ./cpu_id.sh
0 0 0
1 1 0
2 2 0
7 10 0
8 11 0
14 24 0
15 25 0
16 26 0
36 0 0
37 1 0
38 2 0
43 10 0
44 11 0
50 24 0
51 25 0
52 26 0
The virtual machine's vCPUs are pinned, and two vCPUs are placed on each allocated physical core: this host has SMT (hyper-threading) enabled, and the prefer policy favors placing vCPUs on thread siblings.
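This sibling sharing can be checked mechanically. A minimal Python sketch of the idea, using a made-up SMT topology (on a real host the CPU-to-core mapping would be read from /sys/devices/system/cpu/cpu*/topology/core_id, not derived from the modulo rule below):

```python
def siblings_used(pinned_cpus, core_of):
    """Group pinned logical CPUs by physical core and report the cores
    that received more than one pinned CPU (SMT siblings in use)."""
    by_core = {}
    for cpu in pinned_cpus:
        by_core.setdefault(core_of[cpu], []).append(cpu)
    return {core: cpus for core, cpus in by_core.items() if len(cpus) > 1}

# Hypothetical topology: logical CPUs n and n+36 are SMT siblings.
core_of = {cpu: cpu % 36 for cpu in range(72)}

# A prefer-pinned layout similar to the one above: siblings share cores.
print(siblings_used([2, 7, 16, 38, 43, 52], core_of))
# → {2: [2, 38], 7: [7, 43], 16: [16, 52]}
```

An empty result would mean no core carries two vCPUs, which is what the isolate policy produces instead.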
View memory allocation:
$ ps -aux | grep `openstack server show server.cpu.prefer | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} cat /proc/{}/numa_maps | perl numa-maps-summary.pl
N0: 22942 (0.09 GB)
N1: 34 (0.00 GB)
active:
anon: 19827 (0.08 GB)
dirty: 19852 (0.08 GB)
kernelpagesize_kB: 1848 (0.01 GB)
mapmax: 4217 (0.02 GB)
mapped: 3127 (0.01 GB)
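The numa-maps-summary.pl helper is not shown in the article; a rough Python equivalent of the per-node accounting it performs looks like this. The sample numa_maps lines are invented for illustration; real input comes from /proc/\<pid\>/numa_maps, where N\<node\>=\<pages\> counters record how many pages of each mapping live on each NUMA node:

```python
import re

def numa_pages_per_node(numa_maps_text, page_kb=4):
    """Sum the N<node>=<pages> counters from /proc/<pid>/numa_maps text
    and return, per NUMA node, (total pages, approximate GB)."""
    totals = {}
    for line in numa_maps_text.splitlines():
        for node, pages in re.findall(r'\bN(\d+)=(\d+)', line):
            totals[int(node)] = totals.get(int(node), 0) + int(pages)
    return {n: (p, round(p * page_kb / 1024 / 1024, 2))
            for n, p in totals.items()}

# Made-up numa_maps excerpt (two mappings, mostly on node 0):
sample = """7f0000000000 default anon=1024 dirty=1024 N0=1000 N1=24
7f0100000000 default file=/usr/bin/qemu-kvm mapped=512 N0=512"""
print(numa_pages_per_node(sample))
# → {0: (1512, 0.01), 1: (24, 0.0)}
```

The page size is passed in as a parameter because hosts using huge pages report different kernelpagesize_kB values per mapping; the fixed 4 KiB default is a simplification.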
#### isolate virtual machine
Set the isolate binding policy:
$ nova flavor-key machine.cpu set hw:cpu_policy=dedicated hw:cpu_thread_policy=isolate
Create a virtual machine:
$ openstack server create --image cirros --flavor machine.cpu --key-name mykey --nic net-id=8d01509e-4a3a-497a-9118-3827c1e37672 --availability-zone az01:osdev-01 server.cpu.isolate
View virtual machine CPU affinity:
$ ps -aux | grep `openstack server show server.cpu.isolate | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs taskset -c -p
pid 51203's current affinity list: 18-19,24-26,32-35,57-59,64-67

$ ps -aux | grep `openstack server show server.cpu.isolate | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} find /proc/{}/task/ -name "status" | xargs grep Cpus_allowed_list
/proc/51203/task/51203/status:Cpus_allowed_list: 18-19,24-26,32-35,57-59,64-67
/proc/51203/task/51206/status:Cpus_allowed_list: 18-19,24-26,32-35,57-59,64-67
/proc/51203/task/…/status:Cpus_allowed_list: 59
/proc/51203/task/51211/status:Cpus_allowed_list: 65
/proc/51203/task/51212/status:Cpus_allowed_list: 18
/proc/51203/task/51213/status:Cpus_allowed_list: 34
/proc/51203/task/51214/status:Cpus_allowed_list: 24
/proc/51203/task/51215/status:Cpus_allowed_list: 33
/proc/51203/task/51216/status:Cpus_allowed_list: 58
/proc/51203/task/51217/status:Cpus_allowed_list: 67
/proc/51203/task/51218/status:Cpus_allowed_list: 66
/proc/51203/task/51219/status:Cpus_allowed_list: 26
/proc/51203/task/51220/status:Cpus_allowed_list: 35
/proc/51203/task/51221/status:Cpus_allowed_list: 57
/proc/51203/task/51222/status:Cpus_allowed_list: 25
/proc/51203/task/51223/status:Cpus_allowed_list: 19
/proc/51203/task/51224/status:Cpus_allowed_list: 64
/proc/51203/task/51225/status:Cpus_allowed_list: 32
/proc/51203/task/51227/status:Cpus_allowed_list: 18-19,24-26,32-35,57-59,64-67
View the virtual machine CPU running status:
$ ps -aux | grep `openstack server show server.cpu.isolate | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs ps -m -o pid,psr,comm -p | awk 'NR>2 {print $2}' | sort -n | uniq | xargs -n1 ./cpu_id.sh
18 0 1
19 1 1
24 9 1
25 10 1
26 11 1
32 24 1
33 25 1
34 26 1
35 27 1
57 3 1
58 4 1
59 8 1
64 17 1
65 18 1
66 19 1
67 20 1
The virtual machine's vCPUs are pinned, and each vCPU is assigned to a different physical core; the isolate policy leaves the SMT sibling thread of each allocated core unused.
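The isolate guarantee (no two vCPUs on sibling threads of the same core) reduces to a one-line uniqueness check. A sketch with a hypothetical topology mapping (a real check would read core ids from sysfs, which is presumably what the cpu_id.sh helper above does):

```python
def is_isolate_layout(pinned_cpus, core_of):
    """True when no two pinned logical CPUs share a physical core,
    which is what hw:cpu_thread_policy=isolate guarantees."""
    cores = [core_of[c] for c in pinned_cpus]
    return len(cores) == len(set(cores))

# Hypothetical SMT topology: logical CPUs n and n+36 share core n % 36.
core_of = {cpu: cpu % 36 for cpu in range(72)}

print(is_isolate_layout([18, 19, 24, 25, 26], core_of))  # distinct cores → True
print(is_isolate_layout([18, 54], core_of))              # 54 is 18's sibling → False
```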
View memory allocation:
$ ps -aux | grep `openstack server show server.cpu.isolate | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} cat /proc/{}/numa_maps | perl numa-maps-summary.pl
N0: 3077 (0.01 GB)
N1: 22653 (0.09 GB)
active:
anon: 22581 (0.09 GB)
dirty: 22600 (0.09 GB)
kernelpagesize_kB: 1844 (0.01 GB)
mapmax: 4217 (0.02 GB)
mapped: 3131 (0.01 GB)
#### require virtual machine
Set the require binding policy:
$ nova flavor-key machine.cpu set hw:cpu_policy=dedicated hw:cpu_thread_policy=require
Create a virtual machine:
$ openstack quota set --cores 100 admin
$ openstack server create --image cirros --flavor machine.cpu --key-name mykey --nic net-id=8d01509e-4a3a-497a-9118-3827c1e37672 --availability-zone az01:osdev-01 server.cpu.require
View virtual machine CPU affinity:
$ ps -aux | grep `openstack server show server.cpu.require | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs taskset -c -p
pid 194063's current affinity list: 3,5-6,9-13,39,41-42,45-49

$ ps -aux | grep `openstack server show server.cpu.require | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} find /proc/{}/task/ -name "status" | xargs grep Cpus_allowed_list
/proc/194063/task/194063/status:Cpus_allowed_list: 3,5-6,9-13,39,41-42,45-49
/proc/194063/task/194065/status:Cpus_allowed_list: 3,5-6,9-13,39,41-42,45-49
/proc/194063/task/194069/status:Cpus_allowed_list: 10
/proc/194063/task/194070/status:Cpus_allowed_list: 46
/proc/194063/task/194071/status:Cpus_allowed_list: 11
/proc/194063/task/194072/status:Cpus_allowed_list: 47
/proc/194063/task/194073/status:Cpus_allowed_list: 42
/proc/194063/task/194074/status:Cpus_allowed_list: 6
/proc/194063/task/194075/status:Cpus_allowed_list: 41
/proc/194063/task/194076/status:Cpus_allowed_list: 5
/proc/194063/task/194077/status:Cpus_allowed_list: 9
/proc/194063/task/194078/status:Cpus_allowed_list: 45
/proc/194063/task/194079/status:Cpus_allowed_list: 3
/proc/194063/task/194080/status:Cpus_allowed_list: 39
/proc/194063/task/194081/status:Cpus_allowed_list: 48
/proc/194063/task/194082/status:Cpus_allowed_list: 12
/proc/194063/task/194083/status:Cpus_allowed_list: 49
/proc/194063/task/194084/status:Cpus_allowed_list: 13
/proc/194063/task/194088/status:Cpus_allowed_list: 3,5-6,9-13,39,41-42,45-49
View the virtual machine CPU running status:
$ ps -aux | grep `openstack server show server.cpu.require | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs ps -m -o pid,psr,comm -p | awk 'NR>2 {print $2}' | sort -n | uniq | xargs -n1 ./cpu_id.sh
3 3 0
5 8 0
6 9 0
9 16 0
10 17 0
11 18 0
12 19 0
13 20 0
39 3 0
41 8 0
42 9 0
45 16 0
46 17 0
47 18 0
48 19 0
49 20 0
The virtual machine's vCPUs are pinned, with two vCPUs on each allocated physical core, similar to prefer; unlike isolate, the thread siblings of each core are used. The difference from prefer is that require fails scheduling if the host cannot provide SMT siblings, whereas prefer merely favors them.
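All of the affinity output above uses kernel CPU list strings such as 3,5-6,9-13. A small, self-contained parser for that format is handy when scripting these checks (the function name is our own, not part of Nova):

```python
def parse_cpu_list(s):
    """Expand a kernel CPU list string like '3,5-6,9-13' (the format of
    taskset -c and Cpus_allowed_list) into a sorted list of CPU ids."""
    cpus = set()
    for part in s.split(','):
        part = part.strip()
        if '-' in part:
            lo, hi = part.split('-')
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return sorted(cpus)

print(parse_cpu_list('3,5-6,9-13,39,41-42,45-49'))
# → [3, 5, 6, 9, 10, 11, 12, 13, 39, 41, 42, 45, 46, 47, 48, 49]
```

Comparing the parsed set for the require instance against the per-task single-CPU entries is an easy way to confirm that every allowed CPU actually received one vCPU thread.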
View memory allocation:
$ ps -aux | grep `openstack server show server.cpu.require | grep instance_name | awk '{print $4}'` | awk '{{if($11=="/usr/libexec/qemu-kvm") {print $2}}}' | xargs -I {} cat /proc/{}/numa_maps | perl numa-maps-summary.pl
N0: 22939 (0.09 GB)
N1: 34 (0.00 GB)
active: 1680 (0.01 GB)
anon: 19824 (0.08 GB)
dirty: 19850 (0.08 GB)
kernelpagesize_kB: 1848 (0.01 GB)
mapmax: 4217 (0.02 GB)
mapped: 3127 (0.01 GB)

The above is the entire content of "Example Analysis of OpenStack Nova Scheduling Policy". Thank you for reading, and I hope it was helpful!