Introduction to openstack octavia and manual installation process


Openstack octavia is the daemon set behind openstack lbaas that provides load balancing for virtual machine traffic. In essence it is similar to trove: it calls the nova and neutron APIs to boot a virtual machine with haproxy and keepalived installed and connects it to the target network. Octavia has four controller components, api, worker, housekeeping and health-manager, plus the octavia agent that runs inside the amphora virtual machine. The role of the api will not be discussed in detail. Worker: its main job is to communicate with components such as nova and neutron, schedule virtual machines, and send operating instructions for those virtual machines to the octavia agent. Housekeeping: a look at octavia/controller/housekeeping/house_keeping.py shows three functions, SpareAmphora, DatabaseCleanup and CertRotation, that is, maintaining the spare virtual machine pool, cleaning up expired database records, and rotating certificates. Health-manager: checks the status of the virtual machines and talks to the octavia agent inside them to update the status of each component. The octavia agent sits inside the virtual machine: downward it accepts instructions and drives the underlying haproxy, upward it reports its state to the health-manager. You can refer to the blog post http://lingxiankong.github.io/blog/2016/03/30/octavia/?utm_source=tuicool&utm_medium=referral
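For orientation, the four controller components run as separate services on the controller host (per the RDO packaging used in the installation below), while the octavia agent lives inside each amphora virtual machine. Once installed, a quick way to check that the controller side is up:

systemctl status octavia-api octavia-worker octavia-housekeeping octavia-health-manager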

It is more detailed than this article.

Currently there is no official installation documentation, and searching turns up no specific manual installation steps, only the recommendation to install with devstack. Below I summarize the manual installation steps based on devstack's octavia installation script and verify that they work. Corrections are welcome.

I. Installation

1. Create a database

mysql> CREATE DATABASE octavia;
mysql> GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY 'OCTAVIA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'OCTAVIA_DBPASS';

2. Create the user, role and endpoints

openstack user create --domain default --password-prompt octavia
openstack role add --project service --user octavia admin
openstack endpoint create octavia public http://10.1.65.58:9876/ --region RegionOne
openstack endpoint create octavia admin http://10.1.65.58:9876/ --region RegionOne
openstack endpoint create octavia internal http://10.1.65.58:9876/ --region RegionOne
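The endpoint commands assume an octavia service entry already exists in keystone. If it does not, a minimal sketch of registering one (the service type name is an assumption; devstack of that era registered the type simply as octavia):

openstack service create --name octavia --description "Octavia Load Balancing Service" octavia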

3. Install the software packages

yum install openstack-octavia-api openstack-octavia-worker openstack-octavia-health-manager openstack-octavia-housekeeping openstack-octavia-common python-octavia

4. Import the image (the image is derived from the system generated by devstack)

openstack image create amphora-x64-haproxy --public --container-format=bare --disk-format qcow2

5. Create a management network and an ovs port on the host so that octavia-worker, octavia-housekeeping and octavia-health-manager can communicate with the generated virtual machine instances

5.1 Create the management network and subnet

openstack network create lb-mgmt-net
openstack subnet create --subnet-range 192.168.0.0/24 --allocation-pool start=192.168.0.2,end=192.168.0.200 --network lb-mgmt-net lb-mgmt-subnet

5.2 Create the security groups and firewall rules for the management ports

Port 5555 is used on the management network for health reporting. Considering that the octavia component is not yet mature, port 22 is also opened, and the image itself has SSH enabled. A gripe here: trove, which is also an immature module, does not open port 22 by default, and you have to change the source code.

openstack security group create lb-mgmt-sec-grp
openstack security group create lb-health-mgr-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp

5.3 Create a port in the management network for the octavia health_manager on the host to use

neutron port-create --name octavia-health-manager-standalone-listen-port --security-group lb-health-mgr-sec-grp --device-owner Octavia:health-mgr --binding:host_id=controller lb-mgmt-net

5.4 Create an ovs port on the host and connect it to the network created in 5.1

ovs-vsctl --may-exist add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=fa:16:3e:6f:9f:9a -- set Interface o-hm0 external-ids:iface-id=457e4953-b2d6-49ee-908b-2991506602b2

where iface-id and attached-mac are attributes of the port created in 5.3.
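For reference, these values can be read back from that port (a minimal sketch using the openstack client; neutron port-show works just as well):

openstack port show octavia-health-manager-standalone-listen-port -c id -c mac_address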

ip link set dev o-hm0 address fa:16:3e:6f:9f:9a

Run a DHCP client on the host to get an address for the port (why not use the traditional dnsmasq?)

dhclient -v o-hm0 -cf /etc/octavia/dhcp/dhclient.conf

6. Modify the configuration (/etc/octavia/octavia.conf), similar to other openstack component settings

6.1 set up database

[database]
connection = mysql+pymysql://octavia:OCTAVIA_DBPASS@controller/octavia

6.2 set up message queuing

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

6.3 set authentication information for keystone

[keystone_authtoken]
auth_version = 2
admin_password = OCTAVIA_PASS
admin_tenant_name = octavia
admin_user = octavia
auth_uri = http://controller:5000/v2.0

6.4 Set the listening address of the health_manager component, which is the IP address of the port created in 5.3

[health_manager]
bind_port = 5555
bind_ip = 192.168.0.7
controller_ip_port_list = 192.168.0.7:5555

6.5 Set the certificates and keys used to communicate with the virtual machines

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
client_cert = /etc/octavia/certs/client.pem
key_path = /etc/octavia/.ssh/octavia_ssh_key
base_path = /var/lib/octavia
base_cert_dir = /var/lib/octavia/certs
connection_max_retries = 1500
connection_retry_interval = 1
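The SSH key at key_path must also be registered in nova under the name later used for amp_ssh_key_name in 6.6 (155 in this article). A minimal sketch, assuming you are generating a fresh key rather than reusing one from a devstack run:

ssh-keygen -f /etc/octavia/.ssh/octavia_ssh_key -N ""
openstack keypair create --public-key /etc/octavia/.ssh/octavia_ssh_key.pub 155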

6.6 Set the information used to boot the virtual machine instances

[controller_worker]
amp_boot_network_list = 826be4f4-a23d-4c5c-bff5-7739936fac76   # id of the management network created in 5.1
amp_image_tag = amphora                                        # tag defined in the metadata of the image from step 4
amp_secgroup_list = d949202b-ba09-4003-962f-746ae75809f7       # id of the security group created in 5.2
amp_flavor_id = dd49b3d5-4693-4407-a76e-2ca95e00a9ec
amp_image_id = b23dda5f-210f-40e6-9c2c-c40e9daa661a            # id of the image imported in step 4
amp_ssh_key_name = 155
amp_active_wait_sec = 1
amp_active_retries = 100
network_driver = allowed_address_pairs_driver
compute_driver = compute_nova_driver
amphora_driver = amphora_haproxy_rest_driver
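If the flavor referenced by amp_flavor_id does not exist yet, it is an ordinary nova flavor; a minimal sketch of creating one (the name and sizes here are my own assumptions, tune them to the amphora image):

openstack flavor create --id auto --ram 1024 --disk 2 --vcpus 1 m1.amphora

Put the resulting flavor id into amp_flavor_id.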

7 modify neutron configuration

7.1 Modify /etc/neutron/neutron.conf to add the lbaas service plugin

service_plugins = [existing service plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

7.2 set the service provider of lbaas to octavia in the [service_providers] section

service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default

8 start the service

If LBaaS v2 with the agent-based driver was enabled before, turn it off and clean out the lbaas_loadbalancers and lbaas_loadbalancer_statistics tables in the neutron database, otherwise an error will be reported.
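A hedged example of that cleanup, assuming the default neutron database name (the statistics table is emptied first because it references lbaas_loadbalancers):

mysql -e "DELETE FROM neutron.lbaas_loadbalancer_statistics; DELETE FROM neutron.lbaas_loadbalancers;"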

Synchronize database

octavia-db-manage upgrade head

Restart neutron

systemctl restart neutron-server

Start octavia

systemctl restart octavia-housekeeping octavia-worker octavia-api octavia-health-manager
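To have the services come back after a reboot, enabling them is the usual systemd step (a sketch, assuming the same unit names):

systemctl enable octavia-housekeeping octavia-worker octavia-api octavia-health-manager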

II. Verification

9.1 create loadbalancer

[root@controller ~]# neutron lbaas-loadbalancer-create --name test-lb-1 lbtest
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 5af472bb-2068-4b96-bcb3-bef7ff7abc56 |
| listeners           |                                      |
| name                | test-lb-1                            |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 9a4b2de78c2d45cfbf6880dd34877f7b     |
| vip_address         | 192.168.123.10                       |
| vip_port_id         | d163b73c-258a-4e03-90ad-5db31cfe23ac |
| vip_subnet_id       | 74aea53a-014a-4f9c-86f9-805a2a772a27 |
+---------------------+--------------------------------------+

9.2 Look at the virtual machine; note that the loadbalancer address is the VIP, which is different from the virtual machine's own address.

[root@controller ~]# openstack server list | grep 82b59e85-29f2-46ce-ae0b-045b7fceb5ca
| 82b59e85-29f2-46ce-ae0b-045b7fceb5ca | amphora-734da57c-e444-4b8e-a706-455230ae0803 | ACTIVE | lbtest=192.168.123.9; lb-mgmt-net=192.168.0.6 | amphora-x64-haproxy 201610131607 |

9.3 Create a listener

neutron lbaas-listener-create --name test-lb-tcp --loadbalancer test-lb-1 --protocol TCP --protocol-port 22

9.4 Set the security group on the vip port

neutron port-update --security-group default d163b73c-258a-4e03-90ad-5db31cfe23ac

9.5 Create a pool, create three new virtual machines, and add them to the pool as members

openstack server create --flavor m1.small --nic net-id=22525640-297e-40eb-bd77-0a9afd861f8c --image "cirros for kvm raw" --min 3 --max 3 test
[root@controller ~]# openstack server list | grep test-
| d8dc22d4-e657-4c54-96f9-3a53ca67533d | test-3 | ACTIVE | lbtest=192.168.123.8  | cirros for kvm raw |
| c7926665-84c5-48a5-9de5-5e15e71baa5d | test-2 | ACTIVE | lbtest=192.168.123.13 | cirros for kvm raw |
| fcf60c23-b799-4d08-a5a7-2b0fc9f1905e | test-1 | ACTIVE | lbtest=192.168.123.11 | cirros for kvm raw |
neutron lbaas-pool-create --name test-lb-pool-tcp --lb-algorithm ROUND_ROBIN --listener test-lb-tcp --protocol TCP
for i in 8 13 11
do
neutron lbaas-member-create --subnet lbtest --address 192.168.123.${i} --protocol-port 22 test-lb-pool-tcp
done

9.6 Verification

[root@controller ~]# > /root/.ssh/known_hosts; ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 72:c4:11:41:53:51:f2:1b:b5:e6:1b:69:a8:c2:5b:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password:
test-3
[root@controller ~]# > /root/.ssh/known_hosts; ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 3d:88:0f:4a:b1:77:c9:6a:fd:82:4d:31:0c:ca:82:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password:
test-1
[root@controller ~]# > /root/.ssh/known_hosts; ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 1c:03:f0:f9:92:a7:0f:5d:9d:09:22:14:94:62:e4:c4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password:
test-2

III. Process analysis

10.1 Worker operations

Create a virtual machine instance and attach it to the management network:

REQ: curl -g -i -X POST http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}0f810ab0fdd5b92489f73a7f0988adfc9da4e517" -d '{"server": {"name": "amphora-4f22d55b-0680-4111-aef6-da98c9ccd1d4", "imageRef": "b23dda5f-210f-40e6-9c2c-c40e9daa661a", "key_name": "155", "flavorRef": "dd49b3d5-4693-4407-a76e-2ca95e00a9ec", "max_count": 1, "min_count": 1, "personality": [{"path": "/etc/octavia/amphora-agent.conf", "contents": ""}, {"path": "/etc/octavia/certs/client_ca.pem", "contents": "="}, {"path": "/etc/octavia/certs/server.pem", "contents": ""}], "networks": [{"uuid": "826be4f4-a23d-4c5c-bff5-7739936fac76"}], "security_groups": [{"name": "d949202b-ba09-4003-962f-746ae75809f7"}], "config_drive": true}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337

It proceeds to the next step once it detects that the management network interface of the target virtual machine has become ACTIVE:

REQ: curl -g -i -X GET http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers/d3c97360-56b2-4f75-b905-2ef83870a342/os-interface -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}3f6ccac4cb8b70b06fb5e62b9db2272702d8ec67" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337
29993 DEBUG novaclient.v2.client [-] RESP: [200] Content-Length: 286 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-ccc07b37-e942-4a5b-a87a-b0e8d3887ba3 Date: Mon, 17 Oct 2016 04:06:30 GMT Connection: keep-alive
RESP BODY: {"interfaceAttachments": [{"port_state": "ACTIVE", "fixed_ips": [{"subnet_id": "4e3409e5-4e9a-4599-9b2e-f760b2fab380", "ip_address": "192.168.0.11"}], "port_id": "bbf99a69-0fb2-42a6-b7de-b7969bda9d73", "net_id": "826be4f4-a23d-4c5c-bff5-7739936fac76", "mac_addr": "fa:16:3e:01:04:2c"}]}
2016-10-17 12:06:30.078 29993 DEBUG octavia.controller.worker.tasks.amphora_driver_tasks [-] Finalized the amphora. execute /usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py:164

Allocate the vip port used for the external service:

2016-10-17 12:06:30.226 29993 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' (af8ea5a0-42c8-4d30-9ffa-016668811fc8) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:189
2016-10-17 12:06:30.227 29993 DEBUG octavia.controller.worker.tasks.network_tasks [-] Allocate_vip port_id c7d7b552-83ac-4e0c-84bf-0b9cae661eab, subnet_id 74aea53a-014a-4f9c-86f9-805a2a772a27, ip address 192.168.123.31 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py:328

Create the actual port behind the vip and attach that port to the virtual machine:

2016-10-17 12:06:32.662 29993 DEBUG octavia.network.drivers.neutron.allowed_address_pairs [-] Created vip port: 1627d28d-bf54-46eb-9d78-410c5d647bf4 for amphora: 3f6e22a1-e0b0-4098-ba20-daf47cfdae19 _plug_amphora_vip /usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py:97
2016-10-17 12:06:32.663 29993 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X POST http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers/d3c97360-56b2-4f75-b905-2ef83870a342/os-interface -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}3f6ccac4cb8b70b06fb5e62b9db2272702d8ec67" -d '{"interfaceAttachment": {"port_id": "1627d28d-bf54-46eb-9d78-410c5d647bf4"}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337

Create listener

2016-10-17 19:01:09.384 29993 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url https://192.168.0.9:9443/0.5/listeners/c3a1867c-b2e5-49a7-819b-7a7d39063dda/reload request /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:248
2016-10-17 19:01:09.412 29993 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connected to amphora. Response: request /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:270
2016-10-17 19:01:09.414 29993 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.ListenersUpdate' (0f588287-a383-4c70-9847-20187dd19f9f) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:178

10.2 Octavia agent analysis

Create a listening port on port 9443 for worker and health-manager access

2016-10-17 12:10:43 INFO werkzeug [-] * Running on https://0.0.0.0:9443/ (Press CTRL+C to quit)

The octavia agent seems to have a bug and does not display debug information.

11 High availability Test

Change loadbalancer_topology = SINGLE to ACTIVE_STANDBY in the /etc/octavia/octavia.conf configuration file to enable high availability mode. Currently, active-active is not supported.
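In my copy of octavia.conf this option sits under [controller_worker]; a minimal sketch of the change (verify the section name against your release's sample config):

[controller_worker]
loadbalancer_topology = ACTIVE_STANDBY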

After generating the loadbalancer, you can see that two virtual machines are generated

[root@controller octavia]# neutron lbaas-loadbalancer-create --name test-lb1238 lbtest2

Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 4e43f3c7-c0f6-44c7-8dab-e2fc8ed16e0f |
| listeners           |                                      |
| name                | test-lb1238                          |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 9a4b2de78c2d45cfbf6880dd34877f7b     |
| vip_address         | 192.168.235.14                       |
| vip_port_id         | 42f72c9f-4623-4bf5-ae82-29f8cf588d2d |
| vip_subnet_id       | 52e93565-eab2-4316-a04c-3e554992c993 |
+---------------------+--------------------------------------+
[root@controller ~]# openstack server list | grep 192.168.235
| 736b8b76-2918-49a7-8477-995a168709bd | amphora-5379f109-01fa-429c-860b-c739e0c5ad5e | ACTIVE | lb-mgmt-net=192.168.0.8; lbtest2=192.168.235.10 | amphora-x64-haproxy 201610131607 |
| bd867667-b8d2-49c5-bb1e-54f0753d33a3 | amphora-23540889-b07e-4c0e-ab9b-df0075fbb9c3 | ACTIVE | lb-mgmt-net=192.168.0.25; lbtest2=192.168.235.19 | amphora-x64-haproxy 201610131607 |

We see three IPs: the VIP is 192.168.235.14, and the addresses of the two amphora machines are 192.168.235.10 and 192.168.235.19.

Log in to the virtual machines to verify; the IP used to log in is the management network address:

[root@controller ~]# ssh 192.168.0.8 "ps -ef | grep keepalived; cat /var/lib/octavia/vrrp/octavia-keepalived.conf"
root      1868     1  0 04:40 ?  00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      1869  1868  0 04:40 ?  00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      1870  1868  0 04:40 ?  00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      5448  5377  0 05:00 ?  00:00:00 bash -c ps -ef | grep keepalived; cat /var/lib/octavia/vrrp/octavia-keepalived.conf
root      5450  5448  0 05:00 ?  00:00:00 grep keepalived
vrrp_script check_script {
  script /var/lib/octavia/vrrp/check_script.sh
  interval 5
  fall 2
  rise 2
}
vrrp_instance 4e43f3c7c0f644c78dabe2fc8ed16e0f {
  state MASTER
  interface eth2
  virtual_router_id 1
  priority 100
  nopreempt
  garp_master_refresh 5
  garp_master_refresh_repeat 2
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass ee46125
  }
  unicast_src_ip 192.168.235.10
  unicast_peer {
    192.168.235.19
  }
  virtual_ipaddress {
    192.168.235.14
  }
  track_script {
    check_script
  }
}
[root@controller ~]# ssh 192.168.0.8 "ps -ef | grep haproxy; cat /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy.cfg"
nobody    2195     1  0 04:43 ?  00:00:00 /usr/sbin/haproxy -f /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy.cfg -L jrwLnRhlvXcPd21JhvXEMStRHh0 -p /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/836053f0-ea72-46ae-9fae-8b80153ef593.pid -sf 2154
root      6745  6676  0 05:06 ?  00:00:00 bash -c ps -ef | grep haproxy; cat /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy.cfg
root      6747  6745  0 05:06 ?  00:00:00 grep haproxy
# Configuration for test-lb1238
global
    daemon
    user nobody
    group nogroup
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593.sock mode 0666 level user
defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000
peers 836053f0ea7246ae9fae8b80153ef593_peers
    peer 3OduZJiPzm475Q7IgyshE5oq1Jk 192.168.235.19:1025
    peer jrwLnRhlvXcPd21JhvXEMStRHh0 192.168.235.10:1025
frontend 836053f0-ea72-46ae-9fae-8b80153ef593
    option tcplog
    bind 192.168.235.14:22
    mode tcp
    default_backend 457d4de5-3213-4969-8f20-1f2d3505ff1e
backend 457d4de5-3213-4969-8f20-1f2d3505ff1e
    mode tcp
    balance leastconn
    timeout check 5
    server fa28676f-a762-4a8e-91ab-7a83f071b62b 192.168.235.20:22 weight 1 check inter 5s fall 3 rise 3
    server 1ded44da-cba5-434c-8578-95153656c392 192.168.235.24:22 weight 1 check inter 5s fall 3 rise 3

The result of the other one is similar.

Conclusion: the high availability of octavia is accomplished by haproxy plus keepalived.

IV. Other

1. There is an option under services_lbaas.conf

[octavia]

request_poll_timeout = 200

This option means: after a loadbalancer is created, if octavia's state has not become ACTIVE within this time, neutron sets the loadbalancer to ERROR. The default value is 100; in my environment that is too short for high availability mode. The log is as follows:

2016-10-19 09:38:26.392 6256 DEBUG neutron_lbaas.drivers.octavia.driver [req-bee3619a-f9d4-4463-adcd-3cb99826b600 -] Octavia reports load balancer 2676dac6-c41d-4501-9c41-781a176c6baf has provisioning status of PENDING_CREATE thread_op /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:75
2016-10-19 09:38:29.393 6256 DEBUG neutron_lbaas.drivers.octavia.driver [req-bee3619a-f9d4-4463-adcd-3cb99826b600 -] Timeout has expired for load balancer 2676dac6-c41d-4501-9c41-781a176c6baf to complete an operation. The last reported status was PENDING_CREATE thread_op /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:94

2. Examples of minor modifications to the source code:

Push a notification to the alarm system when the loadbalancer status in neutron changes to ACTIVE or ERROR.

Modify /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py

# in thread_op(); urllib2 must be imported at the top of driver.py
if prov_status == 'ACTIVE' or prov_status == 'DELETED':
    kwargs = {'delete': delete}
    if manager.driver.allocates_vip and lb_create:
        kwargs['lb_create'] = lb_create
        # TODO(blogan): drop fk constraint on vip_port_id to ports
        # table because the port can't be removed unless the load
        # balancer has been deleted. Until then we won't populate the
        # vip_port_id field.
        # entity.vip_port_id = octavia_lb.get('vip').get('port_id')
        entity.vip_address = octavia_lb.get('vip').get('ip_address')
    manager.successful_completion(context, entity, **kwargs)
    if prov_status == 'ACTIVE':
        urllib2.urlopen('http://********')
        LOG.debug("report status to * {0} {1}".format(entity.root_loadbalancer.id, prov_status))
    return
elif prov_status == 'ERROR':
    manager.failed_completion(context, entity)
    urllib2.urlopen('http://*******')
    LOG.debug("report status to * {0} {1}".format(entity.root_loadbalancer.id, prov_status))
    return

3. Octavia and neutron do not store their data in the same tables, but a lot of the data has to stay consistent between them. Be sure to keep the related records in the two databases synchronized; in my experience, letting them drift apart causes a lot of problems.
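One way to spot-check the two sides (a sketch; octavia's table name here is taken from the Newton-era schema, verify it against your release):

mysql -e "SELECT id, provisioning_status FROM octavia.load_balancer;"
mysql -e "SELECT id, provisioning_status FROM neutron.lbaas_loadbalancers;"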
