Deployment of OpenStack Multi-nodes in Experimental Environment


Preface

The OpenStack project is an open source cloud computing platform: a distributed system that controls the three major resources of compute, network, and storage. Building such a cloud platform provides us with cloud services in the IaaS (Infrastructure as a Service) model. The core of this article is not theory, so for an overall introduction to cloud computing and the OpenStack concepts, refer to the following three articles:

Talking about Cloud Computing

Overview of OpenStack concepts and core components

OpenStack deployment node type and architecture

The purpose of this article is to give a detailed experimental walkthrough of a multi-node, one-click deployment of OpenStack: a local (yum source based) deployment of the R (Rocky) release. Below, the author briefly describes, practices, and summarizes four aspects: the experimental environment and required resources, system resources, deployment node planning, and the specific deployment process with a summary.

I. Experimental environment and required resources

1.1 System environment

A Win10 host running VMware Workstation 15 (download it yourself; preferably use this version when following the experiment), used to install the guest operating system (CentOS 7.5).

1.2 Resource Pack

The CentOS 7.5 image file and the R (Rocky) version OpenStack source. The resource links are as follows:

Link: https://pan.baidu.com/s/1hFENGyrRTz3lOLUAordTGg

Extraction code: mu5x

II. System resources

This section mainly describes the author's host hardware, since the OpenStack project consumes considerable resources and accounting for them up front avoids unexpected failures during the experimental deployment. Of course, the figures here are just the author's notebook; the hardware actually required may take several attempts to pin down.

The hardware resources used in the experiment are as follows:

CPU: 9th-generation i7 (an i7 is enough; what matters most is the core/thread count); memory: 32G (standard; lower is possible, but preferably not less than 24G); hard disk: 1T SSD (preferably more than 200G of available disk space; the author allocates 300G in the deployment below). These three are the main hardware resources.

The following describes the author's node planning for the experimental deployment; the node types are introduced in the articles linked above and are not repeated here.

III. Deployment node planning

Considering the hardware limits of the experimental environment, it is impossible to deploy as many nodes as a production environment would, so the overall plan is three nodes: one control node and two compute nodes. Familiarize yourself with the architecture diagram again:

Resources are limited, so the experimental deployment can only place the network service on the control node; do not deploy a production environment this way! The experimental deployment serves, on the one hand, to deepen theoretical understanding and, on the other, to build familiarity with the deployment process, the command operations, and some troubleshooting ideas.

Since production deployment has come up, here is a rough example:

Assuming you deploy an OpenStack platform service on 300 servers, you could roughly plan like this:

30 control nodes; 30 network nodes; 100 compute nodes; the rest can be storage nodes.

Speaking of storage, OpenStack has Cinder block storage and Swift object storage. In production, another large project, Ceph distributed storage, is usually used, and OpenStack storage nodes are generally deployed in combination with it. In production, Ceph runs as a highly available cluster to ensure the reliability and availability of the stored data. Interested readers can look further into Ceph.

Here is the specific allocation of resources:

Control node: 2 processors x 2 cores; 8G of memory; two disks, 300G and 1024G (the latter for a later Ceph storage experiment); dual NICs, one in host-only mode (eth0, IP planned as 192.168.100.20) and one in NAT mode (eth2, IP planned as 20.0.0.20).

Compute nodes: the two compute nodes get identical resources: 2 processors x 2 cores; 8G of memory; two disks of 300G and 1024G; and one NIC in host-only mode (eth0, IPs planned as 192.168.100.21 and 192.168.100.22).
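Summarized as a table (the hostnames ct, c1 and c2 are the ones assigned in the deployment steps below):

Node       Hostname  CPU    Memory  Disks          eth0 (host-only)  eth2 (NAT)
Control    ct        2 x 2  8G      300G + 1024G   192.168.100.20    20.0.0.20
Compute 1  c1        2 x 2  8G      300G + 1024G   192.168.100.21    -
Compute 2  c2        2 x 2  8G      300G + 1024G   192.168.100.22    -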

The figure above also shows the components to be installed on each node, but the author simplifies things somewhat to keep the experiment manageable and installs only some of the components. The following walks through the specific deployment process to appreciate the charm of OpenStack.

IV. Specific deployment process

The author divides the one-click deployment of the R version of OpenStack into the processes below. The probability of failures or other surprises during deployment is quite high, so some troubleshooting ideas are given at the end of the article for reference:

1. Install the operating system
2. Configure the system environment
3. Deploy OpenStack with one click

Each step is broken down and demonstrated below. During the deployment, you can define your own network-segment IP addresses for some of the network configuration.

4.1 Install the operating system

As mentioned above, the experimental environment deploys one control node and two compute nodes, so three virtual machines need to be installed. The following is the specific installation process.

1. Modify the local VMnet8 network card

(Figures: the sequence of operations in VMware's Virtual Network Editor, and the result after the change; not reproduced here.)
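As a sketch inferred from the NIC configuration later in this article, the VMnet8 (NAT mode) settings in VMware's Virtual Network Editor end up as:

Subnet IP: 20.0.0.0    Subnet mask: 255.255.255.0    NAT gateway: 20.0.0.2    (DHCP disabled; the guests use static addresses)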

2. Create a new virtual machine (do not power the virtual machine on yet)

The specific process of installing the CentOS 7 Linux system was described in detail in the author's previous article; here only the differences are explained through the following illustrations. Reference link: Centos7 operating system installation

The virtual machine settings of the control node are shown in the following figure:

The virtual machine settings for the compute node are shown below (both nodes are the same):

3. After the above settings are complete, power on and install the virtual machines (it is better to install them one at a time; the setup process is the same on all three nodes, so any one node serves as the example)

After powering on, operate as shown below:

4. During installation, just select the minimal install, then plan the disks according to the following figure

After allocating the disks, click through the dialog box to apply the layout

After clicking Done, the following dialog box appears; continue with the configuration

Steps whose screenshots are not given here are consistent with the steps in the installation link above, and once these settings are finished the remaining actions match a normal installation. Finally, log in to confirm the installation succeeded, then shut the VM down (to avoid resource consumption causing the installation of the other nodes' VMs to fail, given everyone's hardware constraints).

This is the whole of our first step. It may look like a lot, but once you are very familiar with installing a Linux operating system on VMware you will find it very simple; the most important thing is not to forget the two commands before installation.

When the installation completes without problems, start the three virtual machines (preferably one at a time) and begin the second step.

4.2 System environment configuration

Here is a list of the main steps that need to be completed in the configuration of the system environment.

1. Configure the hostname and NICs of each node, and restart the network
2. Turn off the firewall, core protection (SELinux), and NetworkManager, and disable them at boot
3. Upload the software package (the openstack-rocky source) and decompress it
4. Configure the local yum source file
5. Set up no-interaction SSH between the three nodes and verify it
6. Configure time synchronization

Let's start the configuration.

1. Configure the hostname and NICs of each node, and restart the network. (Get the network up first so that a remote connection tool such as Xshell can connect from the local host; on the one hand this simulates the production environment as closely as possible, on the other it makes the command demonstrations easier.) Take a look at the NIC settings.

Control node configuration:

[root@localhost ~]# hostnamectl set-hostname ct
[root@localhost ~]# su
[root@ct ~]# cd /etc/sysconfig/network-scripts/
# Configure the host-only NIC eth0 and the NAT NIC eth2
[root@ct network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6dc229bf-8b5b-4170-ac0d-6577b4084fc0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
[root@ct network-scripts]# cat ifcfg-eth2
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth2
UUID=37e4a752-3820-4d15-89ab-6f3ad7037e84
DEVICE=eth2
ONBOOT=yes
IPADDR=20.0.0.20
NETMASK=255.255.255.0
GATEWAY=20.0.0.2
# Configure the resolv.conf file for public network access
[root@ct network-scripts]# cat /etc/resolv.conf
nameserver 8.8.8.8
# Restart the network, then test
[root@ct ~]# ping www.baidu.com
PING www.wshifen.com (104.193.88.123) 56(84) bytes of data.
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=1 ttl=128 time=182 ms
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=2 ttl=128 time=182 ms
^C
--- www.wshifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 182.853/.../182.863/... ms

Compute node NIC configuration (identical on both nodes except the IP address):

[root@localhost ~]# hostnamectl set-hostname c1
[root@localhost ~]# su
[root@c1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=d8f1837b-ce71-4465-8d6f-97668c343c6a
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.100.21
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
# Compute node 2 is configured the same way, with IP address 192.168.100.22

Configure the /etc/hosts file on the three nodes:

[root@ct ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.20 ct
192.168.100.21 c1
192.168.100.22 c2
# Test whether the nodes can ping each other
[root@ct ~]# ping c1
PING c1 (192.168.100.21) 56(84) bytes of data.
64 bytes from c1 (192.168.100.21): icmp_seq=1 ttl=64 time=0.800 ms
64 bytes from c1 (192.168.100.21): icmp_seq=2 ttl=64 time=0.353 ms
^C
--- c1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.353/0.576/0.800/0.224 ms
[root@ct ~]# ping c2
PING c2 (192.168.100.22) 56(84) bytes of data.
64 bytes from c2 (192.168.100.22): icmp_seq=1 ttl=64 time=0.766 ms
64 bytes from c2 (192.168.100.22): icmp_seq=2 ttl=64 time=0.316 ms
^C
--- c2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.316/0.541/0.766/... ms
[root@c1 ~]# ping c2
PING c2 (192.168.100.22) 56(84) bytes of data.
64 bytes from c2 (192.168.100.22): icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from c2 (192.168.100.22): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from c2 (192.168.100.22): icmp_seq=3 ttl=64 time=0.231 ms
^C
--- c2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.231/0.846/1.25/... ms
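To avoid editing the file three times, one option is to push the control node's copy to the other two nodes (a sketch; until the key exchange in step 5 below, scp prompts for the root password):

scp /etc/hosts c1:/etc/hosts
scp /etc/hosts c2:/etc/hosts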

2. Disable the firewall, core defense (SELinux), and NetworkManager, and disable them at boot (all three nodes need the following commands; in the experimental environment, check these services first whenever working with OpenStack)

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vi /etc/sysconfig/selinux
SELINUX=disabled
systemctl stop NetworkManager
systemctl disable NetworkManager
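A quick way to confirm the services really are stopped and disabled (a sketch; getenforce reports Permissive right after setenforce 0, and Disabled after a reboot with SELINUX=disabled):

systemctl is-enabled firewalld         # expect: disabled
systemctl is-enabled NetworkManager    # expect: disabled
getenforce                             # expect: Permissive (Disabled after reboot)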

3. Upload the software package (the openstack-rocky source), decompress it, and make the related settings

The author uses the xftp tool to upload the package to all three nodes and then decompresses it into the /opt directory.

As shown below

[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz
[root@ct ~]# tar -zxf openstack_rocky.tar.gz -C /opt/
[root@ct ~]# cd /opt/
[root@ct opt]# ls
openstack_rocky
[root@ct opt]# du -h
2.4M    ./openstack_rocky/repodata
306M    ./openstack_rocky
306M    .

4. Configure the local yum source file. (Keep the virtual machine's image file connected: check the virtual machine settings, or check whether the optical-drive icon in the lower right corner has a green dot; connected is normally the default.) The steps are demonstrated on the control node; the same operations apply on the other nodes.

4.1. Mount the system image

[root@ct opt]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Mar 6 05:02:52 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=0d4b2a40-756a-4c83-a520-83289e8d50ca /     xfs     defaults 0 0
UUID=bd59f052-d9bc-47e8-a0fb-55b701b5dd28 /boot xfs     defaults 0 0
UUID=8ad9f9e7-92db-4aa2-a93d-1fe93b63bd89 swap  swap    defaults 0 0
/dev/sr0                                  /mnt  iso9660 defaults 0 0
[root@ct opt]# mount -a
mount: /dev/sr0 is write-protected, mounting read-only
[root@ct opt]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs       291G  1.6G  290G   1% /
devtmpfs       devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs          tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs          tmpfs     3.9G   12M  3.8G   1% /run
tmpfs          tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1      xfs      1014M  134M  881M  14% /boot
tmpfs          tmpfs     781M     0  781M   0% /run/user/0
/dev/sr0       iso9660   4.2G  4.2G     0 100% /mnt

4.2. Create a yum source backup and write the new source file

[root@ct opt]# cd /etc/yum.repos.d/
[root@ct yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo
[root@ct yum.repos.d]# mkdir backup
[root@ct yum.repos.d]# mv C* backup/
[root@ct yum.repos.d]# vi local.repo
[root@ct yum.repos.d]# cat local.repo
[openstack]
name=openstack
baseurl=file:///opt/openstack_rocky    # the path to the decompressed package source
gpgcheck=0
enabled=1

[centos]
name=centos
baseurl=file:///mnt
gpgcheck=0
enabled=1

Modify the yum.conf file and set keepcache to 1, which means downloaded packages are cached:

[root@ct yum.repos.d]# head -10 /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=1    # the only parameter that needs to be modified
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
[root@ct yum.repos.d]# yum clean all    # clear all cached packages
Loaded plugins: fastestmirror
Cleaning repos: centos openstack
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[root@ct yum.repos.d]# yum makecache    # build the local package cache
Loaded plugins: fastestmirror
Determining fastest mirrors
centos                          | 3.6 kB  00:00:00
openstack                       | 2.9 kB  00:00:00
(1/7): centos/group_gz          | 166 kB  00:00:00
(2/7): centos/filelists_db      | 3. MB   00:00:01
(3/7): centos/primary_db        | 3.1 MB  00:00:01
(4/7): centos/other_db          | 1.3 MB  00:00:00
(5/7): openstack/primary_db     | 505 kB  00:00:00
(6/7): openstack/filelists_db   | 634 kB  00:00:00
(7/7): openstack/other_db       | 270 kB  00:00:00
Metadata Cache Created
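Before moving on, it is worth confirming that yum now sees only the two local repositories:

[root@ct yum.repos.d]# yum repolist    # both "centos" and "openstack" should be listed with non-zero package counts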

5. Set up no-interaction (password-free) SSH between the three nodes and verify it.

ssh-keygen -t rsa    # press Enter at every prompt
# The interaction below asks for "yes" and the root password of the virtual machine being logged in to
ssh-copy-id ct
ssh-copy-id c1
ssh-copy-id c2
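A one-line sketch to confirm the key exchange worked (it should print the three hostnames without any password prompt):

for host in ct c1 c2; do ssh $host hostname; done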

At this point, to keep the experiment safe and to verify the previous settings, first take a snapshot, then reboot the virtual machines and verify the configuration (the following verification should be done on each node; the control node is used as the example here).

[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz
[root@ct ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@ct ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:NetworkManager(8)
[root@ct ~]# setenforce 0
setenforce: SELinux is disabled
# Reconfirm that no-interaction SSH works
[root@ct ~]# ssh c1
Last login: Sun Mar 8 13:11:32 2020 from c2
[root@c1 ~]# exit
logout
Connection to c1 closed.
[root@ct ~]# ssh c2
Last login: Sun Mar 8 13:14:18 2020 from gateway
[root@c2 ~]#

6. Configure time synchronization

This step is critical, especially in a production environment: if time cannot be synchronized between servers, many services and businesses cannot run, and it may even lead to major accidents.

In this experimental environment, the Aliyun clock server is used as the example upstream: the control node synchronizes with the Aliyun server, and the two compute nodes synchronize with the control node's time through the ntpd service.

Control node configuration:

[root@ct ~]# yum -y install ntpdate
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntpdate.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Finished Dependency Resolution
//...// part of the output omitted
Installed:
  ntpdate.x86_64 0:4.2.6p5-28.el7.centos
Complete!
# Synchronize with the Aliyun clock server
[root@ct ~]# ntpdate ntp.aliyun.com
 8 Mar 05:20:32 ntpdate[9596]: adjust time server 203.107.6.88 offset 0.017557 sec
[root@ct ~]# date
Sun Mar 8 05:20:40 EDT 2020
[root@ct ~]# yum -y install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-28.el7.centos.x86_64
---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
--> Finished Dependency Resolution
//...// part of the output omitted
Installed:
  ntp.x86_64 0:4.2.6p5-28.el7.centos
Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7
Complete!

Modify the ntp main configuration file
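The edited lines are shown in a figure in the original; as a sketch, the two typical changes to /etc/ntp.conf on the control node look like this (the restrict line is what allows the compute nodes on 192.168.100.0/24 to sync from ct):

server ntp.aliyun.com iburst                                  # replaces the default CentOS server entries
restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap     # accept time queries from the compute nodes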

After saving the file, disable the chronyd.service service and restart ntpd:

[root@ct ~]# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
[root@ct ~]# systemctl restart ntpd
[root@ct ~]# systemctl enable ntpd
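Once ntpd is running, a quick check that the control node reaches its upstream (the Aliyun server should appear in the peer list):

[root@ct ~]# ntpq -p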

Configuration on the two compute nodes:

[root@c1 ~]# yum -y install ntpdate
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntpdate.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Finished Dependency Resolution
//...// part of the output omitted
Installed:
  ntpdate.x86_64 0:4.2.6p5-28.el7.centos
Complete!
[root@c1 ~]# ntpdate ct
 8 Mar 05:36:26 ntpdate[9562]: step time server 192.168.100.20 offset -28798.160949 sec
[root@c1 ~]# crontab -e    # write a periodic scheduled task, then save and quit, for example:
*/30 * * * * /usr/sbin/ntpdate ct >> /var/log/ntpdate.log
no crontab for root - using an empty one
crontab: installing new crontab
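A quick sketch to confirm the scheduled task and the clock on a compute node:

[root@c1 ~]# crontab -l
*/30 * * * * /usr/sbin/ntpdate ct >> /var/log/ntpdate.log
[root@c1 ~]# date    # should now match the control node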

4.3 Deploy OpenStack with one click

Operate on the control node:

# Install the openstack-packstack tool and generate the openstack answer file (txt text format)
[root@ct ~]# yum install -y openstack-packstack
[root@ct ~]# packstack --gen-answer-file=openstack.txt
[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz  openstack.txt

The point is how to modify the answer file; that is not covered in full here. The next article will introduce the configuration parameters of the answer file in detail.

Here are the lines that need to be changed. Modify them carefully.

Line 41:   y -> n
Line 50:   y -> n
Line 97:   192.168.100.11,192.168.100.12
Line 557:  20G
Line 817:  physnet1
Line 862:  physnet1:br-ex
Line 873:  br-ex:eth2
Line 1185: y -> n
# Some network segments and passwords also need to be modified; use sed regular expressions to change them globally
# (the first command sets every *_PW field to a single password; substitute your own value)
[root@ct ~]# sed -i -r 's/(.+_PW)=.+/\1=YOUR_PASSWORD/' openstack.txt
[root@ct ~]# sed -i 's/20.0.0.20/192.168.100.20/g' openstack.txt
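After the hand edits and the two sed commands, a few spot checks help before launching the deployment (a sketch; adjust the patterns to your own values):

grep -n "_PW=" openstack.txt | head -3            # every password field should show the value you set
grep -n "192.168.100.20" openstack.txt | head -3  # the NAT address should have been replaced
sed -n '41p;50p;97p' openstack.txt                # spot-check the hand-edited lines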

The command for the one-click deployment and installation:

[root@ct ~]# packstack --answer-file=openstack.txt
Welcome to the Packstack setup utility
The installation log file is available at: /var/tmp/packstack/20200308-055746-HD3Zl3/openstack-setup.log
Installing:
Clean Up                                                    [ DONE ]
Discovering ip protocol version                             [ DONE ]
Setting up ssh keys                                         [ DONE ]
Preparing servers                                           [ DONE ]
Pre installing Puppet and discovering hosts' details        [ DONE ]
Preparing pre-install entries                               [ DONE ]
Setting up CACERT                                           [ DONE ]
Preparing AMQP entries                                      [ DONE ]
Preparing MariaDB entries                                   [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty [ DONE ]
Preparing Keystone entries                                  [ DONE ]
...  // part of the output omitted

On each node (in Xshell, open another terminal connected to the node) use the following command to watch the log information:

tail -f /var/log/messages

When the output matches the following picture, there is no problem so far; the next step is to wait patiently

The following figure shows that the deployment is successful

We can use a browser (Google Chrome) to log in to the dashboard to verify the deployment; refer to the introduction at the end of the following article:

Introduction to OpenStack - Theory part (2): node types and architecture of OpenStack (includes a sample dashboard login interface)
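Packstack also writes the admin credentials to /root/keystonerc_admin on the control node; assuming the default packstack dashboard URL http://192.168.100.20/dashboard, a quick way to retrieve the login before opening the browser:

[root@ct ~]# cat /root/keystonerc_admin    # OS_USERNAME / OS_PASSWORD are the dashboard credentials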
