OpenStack Computing Service Nova [IV]
Openstack
Time: November 28, 2016
Introduction to Nova:
Nova is one of the two original OpenStack modules; the other is the object storage service, Swift. In an OpenStack deployment, machines are divided into compute nodes and control nodes, and this split is driven mainly by Nova: a node where we install nova-compute is a compute node, and a node running all the Nova services except nova-compute is a control node. nova-compute does nothing but create and run virtual machines; all of the control logic lives on the control node.
Nova consists of many components (services).
Introduction to the Nova services
API: accepts and responds to external requests; supports the OpenStack API and the EC2 API
Cert: responsible for EC2 authentication
Scheduler: schedules virtual machines onto hosts
Conductor: middleware through which compute nodes access the database
Consoleauth: authorization verification for the console
Novncproxy: VNC proxy
Hint: EC2 refers to Amazon's cloud compute service (Elastic Compute Cloud).
Nova scheduler
The role of the Nova scheduler module in openstack is to decide which host (compute node) the virtual machine is created on.
Deciding which physical node a virtual machine should be scheduled to takes two steps:
1. Filter
2. Calculate the weight (Weight)
Tip: why do we so often see the error "No valid host was found"?
Because nova-scheduler has decided that no host has the resources to create the virtual machine. Even if a host has 100 GB of memory, the instance will not be created if nova-scheduler considers that host unqualified. The scheduler's job is to decide which host the virtual machine is created on.
After the hosts have been filtered, the remaining hosts are weighted, and one host is selected according to the weighing policy for each virtual machine to be created.
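As a hedged illustration (the option names below are the Mitaka-era defaults and may differ in other releases; verify them against your installed version before changing anything), the filter list that nova-scheduler applies can be seen and tuned in nova.conf on the control node:
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
# Filters are applied in order; a host must pass every one of them to remain a candidate
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
# The default driver filters first and then weighs the surviving hosts (by free RAM, etc.)
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler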
Control node settings:
We already created the database and made the keystone changes earlier, so we skip those steps here.
Install the package
[root@linux-node1 ~]# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
Modify the connection address of the database in the configuration file
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[database]
connection=mysql+pymysql://nova:nova@192.168.56.11/nova
[api_database]
connection=mysql+pymysql://nova:nova@192.168.56.11/nova_api
Tip: be careful here; each connection string must go under its corresponding section.
Synchronize database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
Hint: warnings printed during the db sync can be ignored.
Check if there is a table structure
[root@linux-node1 ~]# mysql -h 192.168.56.11 -unova -pnova -e "use nova;show tables;"
[root@linux-node1 ~]# mysql -h 192.168.56.11 -unova_api -pnova_api -e "use nova_api;show tables;"
Configure keystone
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[DEFAULT]
auth_strategy=keystone    # uncomment this line; it means keystone is used for authentication
RabbitMq (message queuing configuration)
Because the nova services communicate with each other through the message queue, we need to configure rabbitmq.
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend=rabbit           # search for rpc_backend and uncomment it
rabbit_host=192.168.56.11    # change localhost to the IP address
rabbit_port=5672             # the default port; setting it is optional
rabbit_userid=openstack      # the user we created on rabbitmq
rabbit_password=openstack
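As a quick, hedged sanity check (assuming the openstack user was created on the rabbitmq server in the earlier chapters), rabbitmqctl can confirm that the account and its permissions exist before nova tries to use them:
[root@linux-node1 ~]# rabbitmqctl list_users          # the openstack user should appear in the list
[root@linux-node1 ~]# rabbitmqctl list_permissions    # it should have configure/write/read permissions on the vhost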
Configure some of nova's own features
Set the enabled APIs, enable neutron support, and disable the built-in firewall:
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis=osapi_compute,metadata                      # enable the compute and metadata APIs
use_neutron=true                                         # use the neutron network service
firewall_driver=nova.virt.firewall.NoopFirewallDriver    # disable nova's built-in firewall
Tip: by default, the compute service uses its own built-in firewall service. Since the network service (neutron) already provides a firewall service, you must set firewall_driver to nova.virt.firewall.NoopFirewallDriver to disable the one built into the compute service.
We do not set the my_ip option, because it is a common pitfall.
Configure the VNC agent to use the management interface IP address of the control node
[root@linux-node1 ~]# vim /etc/nova/nova.conf
vncserver_listen=192.168.56.11
vncserver_proxyclient_address=192.168.56.11
Configure glance mirroring service API
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[glance]
api_servers=http://192.168.56.11:9292
Configure lock path
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
Nova configuration description
[root@linux-node1 ~]# grep '^[a-z]' /etc/nova/nova.conf
enabled_apis=osapi_compute,metadata                               # enabled APIs
auth_strategy=keystone                                            # use keystone
firewall_driver=nova.virt.firewall.NoopFirewallDriver             # disable the built-in firewall
use_neutron=true                                                  # use neutron
rpc_backend=rabbit                                                # use rabbitmq
connection = mysql+pymysql://nova:nova@192.168.56.11/nova_api     # api database address
connection = mysql+pymysql://nova:nova@192.168.56.11/nova         # database address
api_servers=http://192.168.56.11:9292                             # glance api address
auth_uri = http://192.168.56.11:5000                              # keystone
auth_url = http://192.168.56.11:35357                             # keystone
memcached_servers = 192.168.56.11:11211                           # keystone
auth_type = password                                              # keystone
project_domain_name = default                                     # keystone
user_domain_name = default                                        # keystone
project_name = service                                            # keystone
username = nova                                                   # keystone
password = nova                                                   # keystone
lock_path=/var/lib/nova/tmp                                       # lock path
rabbit_host=192.168.56.11                                         # rabbitmq
rabbit_port=5672                                                  # rabbitmq
rabbit_userid=openstack                                           # rabbitmq
rabbit_password=openstack                                         # rabbitmq
vncserver_listen=192.168.56.11                                    # VNC
vncserver_proxyclient_address=192.168.56.11                       # VNC
Set up boot and start the service
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
After nova starts successfully, you still need to register it in keystone; otherwise other services will not be able to reach it.
Create a nova service
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack service create --name nova --description "Openstack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Openstack Compute                |
| enabled     | True                             |
| id          | c9aca55493924f2ba9cb5b304cb1322f |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
Create a Compute service api endpoint
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  compute public http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | 71414f00b2834e8190ee25c219e3d3c4             |
| interface    | public                                       |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | c9aca55493924f2ba9cb5b304cb1322f             |
| service_name | nova                                         |
| service_type | compute                                      |
| url          | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
+--------------+----------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  compute admin http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | 9162f57b72e244f799086eeca3b7df6c             |
| interface    | admin                                        |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | c9aca55493924f2ba9cb5b304cb1322f             |
| service_name | nova                                         |
| service_type | compute                                      |
| url          | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
+--------------+----------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  compute internal http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | 8fb3d0da5ee64ed693b7b4608844d5ff             |
| interface    | internal                                     |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | c9aca55493924f2ba9cb5b304cb1322f             |
| service_name | nova                                         |
| service_type | compute                                      |
| url          | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
+--------------+----------------------------------------------+
Check whether the control node is successful
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name                 | Service     | Zone     |
+---------------------------+-------------+----------+
| linux-node1.abcdocker.com | consoleauth | internal |
| linux-node1.abcdocker.com | conductor   | internal |
| linux-node1.abcdocker.com | scheduler   | internal |
+---------------------------+-------------+----------+

Nova compute node configuration
How many virtual machines we can build in this architecture depends on the configuration of the compute nodes. For this lab we host the nodes on VMware, because VMware supports nested virtualization while most other desktop virtualization software does not.
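A hedged pointer for VMware Workstation users (the exact option name and UI label may vary by version): nested virtualization corresponds to the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox in the VM's processor settings, which maps to a line like the following in the VM's .vmx file:
vhv.enable = "TRUE"    # assumed option name; exposes VT-x/AMD-V to the guest so KVM can run inside it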
Our nova computing node IP is 192.168.56.12
Virtualization technology needs to be turned on
Because we will use KVM to create virtual machines, hardware virtualization must be enabled. On a physical server, this is enabled in the BIOS.
Environmental preparation
[root@linux-node2 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@linux-node2 ~]# uname -r
3.10.0-327.36.2.el7.x86_64
Time synchronization
[root@linux-node1 ~]# yum install ntpdate -y
[root@linux-node1 ~]# ntpdate time1.aliyun.com
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai    # set the time zone
[root@linux-node1 ~]# rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
Install the openstack repository
[root@linux-node2 ~]# yum install -y centos-release-openstack-mitaka
Install the openstack client
[root@linux-node1 ~]# yum install -y python-openstackclient
Because the nova configuration on the compute node is the same as on the control node except for the database settings, we copy the file over with scp and then modify it.
Install the openstack SELinux Management Pack
[root@linux-node2 ~]# yum install -y openstack-selinux
Install nova
[root@linux-node2 ~]# yum install -y openstack-nova-compute
Steps:
1. scp nova.conf from the control node
2. Comment out the database configuration
3. Change the VNC configuration
4. Set the virtualization type option
Tip: pay attention to the permissions of the nova.conf file
Time must be synchronized!
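A minimal, hedged way to double-check this (assuming root ssh from the compute node to the control node works) is to run the same ntpdate sync on the compute node and compare the clocks on both machines:
[root@linux-node2 ~]# yum install ntpdate -y && ntpdate time1.aliyun.com
[root@linux-node2 ~]# date; ssh 192.168.56.11 date    # the two timestamps should differ by at most a second or so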
1. Copy the nova.conf of the control node to the compute node
[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/
The authenticity of host '192.168.56.12 (192.168.56.12)' can't be established.
ECDSA key fingerprint is 43:50:3c:fa:03:29:7c:3c:5f:aa:d2:76:b5:8e:d9:54.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.12' (ECDSA) to the list of known hosts.
root@192.168.56.12's password:
nova.conf                                   100%  180KB 180.0KB/s   00:00
2. Log in to the compute node and modify the file
First of all, make sure the permissions are consistent:
[root@linux-node2 nova]# ll /etc/nova/nova.conf
-rw-r----- 1 root nova 184332 Nov 18 17:02 /etc/nova/nova.conf
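If scp changed the owner or mode (a hedged example; adjust the values to match what the control node shows), restore them before starting the service:
[root@linux-node2 nova]# chown root:nova /etc/nova/nova.conf    # the nova group must be able to read the file
[root@linux-node2 nova]# chmod 640 /etc/nova/nova.conf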
3. Modify the configuration file
[root@linux-node2 nova]# vim /etc/nova/nova.conf
#connection =    # search for mysql and comment out both connection lines
#connection =
Configure vnc
novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.56.12
enabled=true    # around line 5384
Tip: the server component listens on all IP addresses, while the proxy component listens only on the IP address of the compute node's management network interface. The base URL is where a web browser can reach the remote console of instances running on this compute node.
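As a hedged sanity check (assuming the nova-novncproxy service on the control node is already running), the base URL should answer over HTTP from the machine where you will open the browser:
[root@linux-node1 ~]# curl -I http://192.168.56.11:6080/vnc_auto.html    # an HTTP 200 response means the noVNC proxy is reachable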
Select Virtualization Type
Determine if your compute node supports virtualized hardware acceleration.
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns one or greater, your compute node supports hardware acceleration and requires no additional configuration.
If this command returns 0, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.
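A minimal sketch of that fallback (edited in the same [libvirt] section as the KVM setting shown further below):
[root@linux-node2 nova]# vim /etc/nova/nova.conf
[libvirt]
virt_type=qemu    # pure software emulation; use only when the CPU flags above are absent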
KVM article: http://www.abcdocker.com/abcdocker/1627
Make the following edits in the [libvirt] area of the / etc/nova/nova.conf file:
[libvirt]
...
virt_type=kvm    # virtualization type
Summary
We modified the following five lines in nova.conf:
[root@linux-node2 nova]# grep '^[a-z]' /etc/nova/nova.conf
...
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.56.12
novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html
virt_type=kvm
Set up boot boot
[root@linux-node2 ~]# systemctl enable libvirtd openstack-nova-compute
[root@linux-node2 ~]# systemctl start libvirtd openstack-nova-compute
List the service components to verify that each process was successfully started and registered:
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name                 | Service     | Zone     |
+---------------------------+-------------+----------+
| linux-node1.abcdocker.com | consoleauth | internal |
| linux-node1.abcdocker.com | conductor   | internal |
| linux-node1.abcdocker.com | scheduler   | internal |
| linux-node2.abcdocker.com | compute     | nova     |
+---------------------------+-------------+----------+
The output should show three service components enabled on the control node and one service component enabled on the compute node.
Check to see if nova and keystone are normal
[root@linux-node1 ~]# nova service-list
+----+------------------+---------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+---------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | linux-node1.abcdocker.com | internal | enabled | up    | 2016-11-18T09:24:23.000000 | -               |
| 2  | nova-conductor   | linux-node1.abcdocker.com | internal | enabled | up    | 2016-11-18T09:24:22.000000 | -               |
| 3  | nova-scheduler   | linux-node1.abcdocker.com | internal | enabled | up    | 2016-11-18T09:24:23.000000 | -               |
| 6  | nova-compute     | linux-node2.abcdocker.com | nova     | enabled | up    | 2016-11-18T09:24:23.000000 | -               |
+----+------------------+---------------------------+----------+---------+-------+----------------------------+-----------------+
Check whether nova and glance services are normal with each other
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| fc67361d-ad30-40b2-9d96-941e50fc17f5 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
Hint again: time must be synchronized!
This is the end of nova installation!