
How to deploy OpenStack Juno


This article describes how to deploy OpenStack Juno. It is intended as a practical, hands-on reference.

Remarks before installation

The official recommended minimum configuration for each node is as follows:

Controller Node: 1 processor, 2 GB memory, and 5 GB storage

Network Node: 1 processor, 512 MB memory, and 5 GB storage

Compute Node: 1 processor, 2 GB memory, and 10 GB storage

In practice, using Ubuntu 14.04 LTS virtual machines, a controller node given only 8 GB of storage still runs out of disk space in later stages, so it is recommended to allocate 12 GB instead.

Since this article installs everything in virtual machines, it is worth preparing a virtual machine template first and applying the common configuration to it once, which saves repeated work later. The common steps are as follows:

1. Edit the `/etc/hosts` file (`# vi /etc/hosts`) and add the following entries:

```
10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2
```

2. Enable the latest OpenStack release packages. Install the Ubuntu Cloud Archive keyring and repository:

```
# apt-get install ubuntu-cloud-keyring
# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/juno main" > /etc/apt/sources.list.d/cloudarchive-juno.list
```

3. Update all installed packages:

```
# apt-get update && apt-get dist-upgrade
```

4. Install NTP:

```
# apt-get install ntp
```

Network configuration

![Network basic architecture diagram](https://wt-prj.oss.aliyuncs.com/d73bb2b1eddf48eeb64933373734de51/d0522985-0bea-402e-83f9-1306bdd988ac.png)

The figure above shows the basic network architecture used in this article. An earlier attempt to use the IP layout from the official documentation failed to reach the external network (the exact cause is unknown; the guest addresses appeared to conflict with the host's subnet because they were on the same network segment, and tests with Ubuntu Server pointed to a gateway problem: connectivity is restored once the eth0 gateway is commented out in the configuration file). The IP layout therefore follows [OPENSTACK JUNO: INSTALLATION USING VIRTUALBOX & UBUNTU 14.10 (BASIC ENVIRONMENT)-1](http://chaalpritam.blogspot.jp/2015/03/openstack-juno-installation-basic-environment.html) and is described in detail below.

The networks are laid out as follows:

| Network | Network segment used |
|:--|:--|
| Management network | 10.10.10.1 |
| Tunnel network | 10.20.20.1 |
| External network | 192.168.100.1 |

The management-network IP of each node is set as follows:

| Node | IP |
|:--|:--|
| controller | 10.10.10.10 |
| compute | 10.10.10.11 |
| network | 10.10.10.12 |
| block | 10.10.10.13 |
| object1 | 10.10.10.14 |
| object2 | 10.10.10.15 |

Before configuring the individual nodes, first create the host-only virtual networks in VirtualBox (VirtualBox -> Global Settings -> Network -> Host-Only Networks):

1. Add VirtualBox Host-Only Ethernet Adapter #2 and set the host virtual interface's IPv4 address to 10.10.10.1 with netmask 255.255.255.0.
2. Add VirtualBox Host-Only Ethernet Adapter #3, set the host virtual interface's IPv4 address to 10.20.20.1 with netmask 255.255.255.0, and do not enable the DHCP server option.
3. Add VirtualBox Host-Only Ethernet Adapter #4, set the host virtual interface's IPv4 address to 192.168.100.1 with netmask 255.255.255.0, and do not enable the DHCP server option.

Network configuration on the controller node

Controller node virtual machine settings, Settings -> Network:

1. Adapter 1: Attached to -> Host-Only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #2, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked.
2. Adapter 2: Attached to -> NAT, Adapter Type -> Paravirtualized Network (virtio-net), Cable Connected -> checked.

After starting the virtual machine, configure its network by editing the `/etc/network/interfaces` file (`# vi /etc/network/interfaces`) and adding the following:

```
# The management network interface
auto eth0
iface eth0 inet static
    address 10.10.10.10
    netmask 255.255.255.0

# The NAT network
auto eth2
iface eth2 inet dhcp
```
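To apply the new addresses without a full reboot, the interfaces can be brought down and up again; a minimal sketch, assuming the eth0/eth2 names used above (rebooting the virtual machine works just as well):

```
# ifdown eth0 ; ifup eth0
# ifdown eth2 ; ifup eth2
```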

Configure name resolution: edit the `/etc/hostname` file (`# vi /etc/hostname`) and change the hostname to controller, then edit the `/etc/hosts` file (`# vi /etc/hosts`) and add the following entries; restart the system afterwards to activate the configuration:

```
10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2
```

Network configuration on the compute node

Compute node virtual machine settings, Settings -> Network:

1. Adapter 1: Attached to -> Host-Only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #2, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked.
2. Adapter 2: Attached to -> Host-Only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #3, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked.
3. Adapter 3: Attached to -> NAT, Adapter Type -> Paravirtualized Network (virtio-net), Cable Connected -> checked.

After starting the virtual machine, configure its network by editing the `/etc/network/interfaces` file (`# vi /etc/network/interfaces`) and adding the following:

```
# The management network interface
auto eth0
iface eth0 inet static
    address 10.10.10.11
    netmask 255.255.255.0

# The tunnel network interface
auto eth2
iface eth2 inet static
    address 10.20.20.11
    netmask 255.255.255.0

# The NAT network
auto eth3
iface eth3 inet dhcp
```

Configure name resolution: edit the `/etc/hostname` file (`# vi /etc/hostname`) and change the hostname to compute, then edit the `/etc/hosts` file (`# vi /etc/hosts`) and add the following entries; restart the system afterwards to activate the configuration:

```
10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2
```

Network configuration on the network node

Network node virtual machine settings, Settings -> Network:

1. Adapter 1: Attached to -> Host-Only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #2, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked.
2. Adapter 2: Attached to -> Host-Only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #3, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked.
3. Adapter 3: Attached to -> Host-Only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #4, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked.
4. Adapter 4: Attached to -> NAT, Adapter Type -> Paravirtualized Network (virtio-net), Cable Connected -> checked.

After starting the virtual machine, configure its network by editing the `/etc/network/interfaces` file (`# vi /etc/network/interfaces`) and adding the following:

```
# The management network interface
auto eth0
iface eth0 inet static
    address 10.10.10.12
    netmask 255.255.255.0

# The tunnel network interface
auto eth2
iface eth2 inet static
    address 10.20.20.12
    netmask 255.255.255.0

# The external network interface
auto eth3
iface eth3 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down

# The NAT network
auto eth4
iface eth4 inet dhcp
```
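After rebooting or restarting networking, it can be worth confirming that the external interface came up without an address; a quick check, assuming the eth3 name used above:

```
# ip addr show eth3
```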

Configure name resolution: edit the `/etc/hostname` file (`# vi /etc/hostname`) and change the hostname to network, then edit the `/etc/hosts` file (`# vi /etc/hosts`) and add the following entries; restart the system afterwards to activate the configuration:

```
10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2
```

Verify connectivity between nodes

Each node should be able to ping the other nodes as well as the external network. Taking the controller node as an example:

1. Ping the public network:

```
$ ping -c 4 www.baidu.com
PING www.a.shifen.com (14.215.177.37) 56(84) bytes of data.
64 bytes from 14.215.177.37: icmp_seq=1 ttl=54 time=6.16 ms
64 bytes from 14.215.177.37: icmp_seq=2 ttl=54 time=6.42 ms
64 bytes from 14.215.177.37: icmp_seq=3 ttl=54 time=6.12 ms
64 bytes from 14.215.177.37: icmp_seq=4 ttl=54 time=5.84 ms

--- www.a.shifen.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 5.841/6.140/6.42/0.229 ms
```

2. Ping the compute node:

```
$ ping -c 4 compute
PING compute (10.10.10.11) 56(84) bytes of data.
64 bytes from compute (10.10.10.11): icmp_seq=1 ttl=64 time=1.35 ms
64 bytes from compute (10.10.10.11): icmp_seq=2 ttl=64 time=0.936 ms
64 bytes from compute (10.10.10.11): icmp_seq=3 ttl=64 time=0.843 ms
64 bytes from compute (10.10.10.11): icmp_seq=4 ttl=64 time=1.09 ms

--- compute ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.843/1.055/1.352/0.194 ms
```

3. Ping the network node:

```
$ ping -c 4 network
PING network (10.10.10.12) 56(84) bytes of data.
64 bytes from network (10.10.10.12): icmp_seq=1 ttl=64 time=0.975 ms
64 bytes from network (10.10.10.12): icmp_seq=2 ttl=64 time=0.530 ms
64 bytes from network (10.10.10.12): icmp_seq=3 ttl=64 time=1.05 ms
64 bytes from network (10.10.10.12): icmp_seq=4 ttl=64 time=0.815 ms

--- network ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.530/0.844/1.056/0.200 ms
```

Network Time Protocol (NTP)

NTP is used to keep time synchronized between the nodes. It is recommended to configure the controller node to reference more accurate (lower stratum) upstream servers, and to configure the other nodes to reference the controller node.

Configuration of NTP on controller nodes

Install NTP

```
# apt-get install ntp
```

Configure NTP:

1. Modify the configuration file (`# vi /etc/ntp.conf`) and add the following code (a sketch of the resulting file follows these steps):

```
server NTP_SERVER iburst
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
```

NTP_SERVER is replaced with controller here; it can also be set to another hostname or server address. Other server entries already present in the configuration file can be commented out if you do not need them. Remove the nopeer and noquery options from the restrict lines. If the `/var/lib/ntp/ntp.conf.dhcp` file exists, delete it.

2. Restart the NTP service:

```
# service ntp restart
```
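For concreteness, here is a minimal sketch of what the edited `/etc/ntp.conf` might look like if the controller is instead pointed at a public pool server; `0.ubuntu.pool.ntp.org` is only an illustrative placeholder and is not part of the original setup:

```
# /etc/ntp.conf on the controller (illustrative sketch only)
server 0.ubuntu.pool.ntp.org iburst    # any reachable upstream server works here
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
```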

Configuration of NTP on the other nodes

Install NTP:

```
# apt-get install ntp
```

Configure NTP:

1. Modify the configuration file (`# vi /etc/ntp.conf`), add the following line, and comment out all other server entries:

```
server controller iburst
```

If the `/var/lib/ntp/ntp.conf.dhcp` file exists, delete it.

2. Restart the NTP service:

```
# service ntp restart
```

Verification

1. Run the following command on the controller node:

```
# ntpq -c peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 localhost       .STEP.          16 l    -   64    0    0.000    0.000   0.000
 91.189.89.199   193.79.237.14    2 u   67   64   16  273.049  -69.706  53.406
```

The remote column should show the hostnames or IP addresses of one or more NTP servers.

2. Run the following command on the controller node:

```
# ntpq -c assoc
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 21224  8011   yes    no  none    reject    mobilize  1
  2 21225  965a   yes   yes  none  sys.peer    sys_peer  5
```

The condition column should contain sys.peer for at least one server.

3. Run the following command on the other nodes:

```
# ntpq -c peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 controller      91.189.89.199    3 u   23   64    0    0.000    0.000   0.000
```

The remote column should display the hostname of the controller node.

4. Run the following command on the other nodes:

```
# ntpq -c assoc
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 57512  9024   yes   yes  none    reject   reachable  2
```

According to the official documentation, the condition column should show sys.peer, but here it shows reject for reasons unknown. The following two articles, which report the same reject result, may be useful references: "OpenStack introductory tutorials - Part 2 - configuring the OpenStack experimental environment" and "openstack [Kilo] introduction [preparation] 2: NTP installation".
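If the condition stays at reject, one simple option (a sketch, not taken from the referenced articles) is to keep re-checking for a while, since ntpd needs several poll intervals before it selects a peer:

```
# watch -n 30 "ntpq -c peers; ntpq -c assoc"
```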

OpenStack installation package

The installation method was already covered in the remarks before installation at the beginning of this article.

Database configuration

The database is generally installed on the controller node; this article uses MySQL (the MariaDB packages).

1. Install the database:

```
# apt-get install mariadb-server python-mysqldb
```

You will be asked to create and enter a password for the root account during installation.

2. Modify the database configuration file (`# vi /etc/mysql/my.cnf`): in the `[mysqld]` section, set `bind-address` to the IP of the controller's management-network interface:

```
[mysqld]
...
bind-address = 10.10.10.10
```

Continue by adding the following code to the `[mysqld]` section:

```
[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
```

3. Restart the database service:

```
# service mysql restart
```

4. Set the database security parameters:

```
# mysql_secure_installation
```

In this step you do not need to reset the root password; the remaining settings can be accepted as prompted.

Message service (messaging server)

This article uses RabbitMQ as the message queue server. Install RabbitMQ:

```
# apt-get install rabbitmq-server
```

During installation, RabbitMQ automatically creates an account called guest with the password guest. For convenience, this article keeps using the guest account; you can change its password with the following command, replacing RABBIT_PASS with a suitable password:

```
# rabbitmqctl change_password guest RABBIT_PASS
Changing password for user "guest" ...
...done.
```

For versions newer than 3.3.0, remote access for the guest account needs to be enabled as follows:

1. Check the RabbitMQ version:

```
# rabbitmqctl status | grep rabbit
Status of node 'rabbit@controller' ...
      {running_applications,[{rabbit,"RabbitMQ","3.4.2"},
```

2. Check the `/etc/rabbitmq/rabbitmq.config` file and make sure `loopback_users` is set to an empty list:

```
[{rabbit, [{loopback_users, []}]}].
```

If the file does not exist, create it and write the above code into it.

3. Restart the service:

```
# service rabbitmq-server restart
```
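With the database and the message broker configured, a couple of optional sanity checks can confirm the settings took effect; a minimal sketch (the MySQL root password is whatever was set during installation):

```
# Confirm the character set and default storage engine set in my.cnf
mysql -u root -p -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'default_storage_engine';"

# Confirm the guest account still exists after the password change
rabbitmqctl list_users
```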

At this point, the basic environment configuration of OpenStack is complete.

Problems during installation and their solutions

When installing the OpenStack repository packages, using sudo leaves several files uninstalled at the end because of permission problems; switch to the root user to complete the installation.

When configuring the virtual machine network, the virtual network adapters should use virtio-net; otherwise the machine cannot reach the external network once networking is set up. In addition, if you set a gateway for the internal (host-only) networks in the /etc/network/interfaces file, you will also lose external connectivity.
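For illustration, a sketch of a management-network stanza with the gateway left out; the commented line shows what should not be enabled in this setup, and the 10.10.10.1 address is assumed to be the host-only adapter created earlier:

```
# /etc/network/interfaces — management network stanza (sketch)
auto eth0
iface eth0 inet static
    address 10.10.10.10
    netmask 255.255.255.0
    # gateway 10.10.10.1    <- do not set a gateway on the host-only networks
```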

Even if the RabbitMQ version is not higher than 3.3.0, it is best to set up the /etc/rabbitmq/rabbitmq.config file as required for the newer versions; otherwise, during the later Neutron installation, the `neutron agent-list` command may return an empty list after the installation completes.
