OpenStack architecture: Keystone components (1)


This blog builds the Keystone component of the OpenStack architecture; follow-up posts will cover the management of Glance, Nova, Neutron, Horizon, Cinder, and virtual machines in turn. Before deploying the lab, here is a quick overview of OpenStack.

What is OpenStack?

OpenStack is a community, a project, and a body of open source software for building public and private clouds. It provides an operating platform and tool set for deploying clouds, with the goal of helping organizations run virtual compute or storage services and offering scalable, flexible cloud computing for public and private clouds of any size.

The OpenStack open source project is maintained by the community and includes OpenStack Compute (code-named Nova), OpenStack Object Storage (code-named Swift), and the OpenStack Image Service (code-named Glance). OpenStack provides an operating platform, or toolkit, for orchestrating clouds.

OpenStack components:

OpenStack currently has three main components: compute, object storage, and the image service.

OpenStack Compute is the cloud controller: it launches virtual instances for a user or group and configures the networking of the instances in each project. OpenStack Object Storage stores objects in a high-capacity system with built-in redundancy and fault tolerance; typical uses include backing up or archiving data, storing graphics or video, keeping secondary or tertiary static data, and building new applications that integrate with the data store. The OpenStack Image Service is a lookup and retrieval system for virtual machine images. It can be configured with one of three back ends: OpenStack Object Storage, Amazon S3 directly, or S3 reached through Object Storage acting as an intermediary.

OpenStack composition

An OpenStack deployment is composed of four kinds of nodes: control nodes, compute nodes, network nodes, and storage nodes.

Control node: controls the other nodes, handling virtual machine creation, migration, network allocation, storage allocation, and so on.

Compute node: runs the virtual machines.

Network node: handles communication between the external network and the internal network.

Storage node: manages additional storage for the virtual machines.

Control node architecture

The control node includes the following services:

Management support services, basic management services, and extension management services.

1) The management support services include two services: MySQL and Qpid.

MySQL: the database where data generated by the basic and extension services is stored.

Qpid: a message broker (also called message middleware) that provides a unified messaging service for the other services.

2) The basic management services consist of five services: Keystone, Glance, Nova, Neutron, and Horizon.

Keystone: the identity (authentication) service. It manages, creates, and modifies authentication information and tokens for all of the other components, using MySQL as the unified database.

Glance: the image service. It manages the images that can be supplied when a virtual machine is deployed, including importing and formatting images and creating the corresponding templates.

Nova: the compute service. It manages the Nova services on the compute nodes and communicates with them through the Nova API.

Neutron: the network service. It provides network topology management for the network nodes, as well as the Neutron management panel in Horizon.

Horizon: the console service. It provides web-based management of all services on all nodes and is commonly referred to as the Dashboard.

3) The extension management services include five services: Cinder, Swift, Trove, Heat, and Ceilometer.

Cinder: provides Cinder-related management of the storage nodes, as well as the Cinder management panel in Horizon.

Swift: provides Swift-related management of the storage nodes, as well as the Swift management panel in Horizon.

Trove: provides Trove-related management of the database nodes, as well as the Trove management panel in Horizon.

Heat: provides template-based resource initialization in the cloud environment, dependency handling, deployment, and other basic operations, plus advanced features such as auto-scaling and load balancing.

Ceilometer: monitors physical and virtual resources, records and analyzes the data, and triggers the corresponding actions when certain conditions are met.

Experimental environment:

This lab requires three virtual machines: a control node (which also hosts the image service), a compute node, and a storage node. It is recommended that each virtual machine be configured with 2 CPUs and 4 GB of memory.

Host        System    IP address        Roles
controller  CentOS 7  192.168.37.128    keystone, ntp, mariadb, rabbitmq, memcached, etcd, apache
compute     CentOS 7  192.168.37.130    nova, ntp
cinder      CentOS 7  192.168.37.131    cinder, ntp

Experimental process:

I. Environment preparation (all three virtual machines)

1. Turn off the firewall and disable SELinux

systemctl stop firewalld.service

setenforce 0
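The two commands above only take effect until the next reboot. If the change should persist across reboots, a minimal extra step (a sketch, assuming the standard CentOS 7 paths) is:

systemctl disable firewalld.service    # keep the firewall from starting at boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # disable SELinux permanently (applies after reboot)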

2. Modify the host name separately

hostnamectl set-hostname controller    # control node

bash

hostnamectl set-hostname compute    # compute node

bash

hostnamectl set-hostname cinder    # storage node

bash
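To confirm the renames took effect, a quick check on each node (a sketch; hostnamectl prints the static hostname among other details):

hostnamectl status    # the "Static hostname" field should show controller, compute, or cinder respectively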

3. Modify hosts file

vim /etc/hosts

192.168.37.128 controller

192.168.37.130 compute

192.168.37.131 cinder

4. Node connectivity test

# on the controller node
ping -c 4 openstack.org    # send 4 packets to test connectivity to the official site
ping -c 4 compute

# on the compute node
ping -c 4 openstack.org
ping -c 4 controller

# on the storage node
ping -c 4 openstack.org
ping -c 4 controller

5. Back up the default yum source

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

6. Download the latest yum source

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

7. Install the required openstack software package

yum install centos-release-openstack-queens -y

yum upgrade -y    # upgrade installed packages from the new repository

yum install python-openstackclient -y

yum install openstack-selinux -y
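As a quick sanity check that the client installed correctly, the following harmless sketch can be run (the exact version string depends on which Queens packages were pulled in):

openstack --version    # prints the python-openstackclient version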

II. Configure the NTP clock service

## controller node ##

1. Install the chrony package with yum

yum install chrony -y

2. Modify chrony configuration file

vim /etc/chrony.conf
# insert at the beginning of the file:
server controller iburst    # use the controller as the time source for all nodes
allow 192.168.37.0/24       # allow the 192.168.37.0/24 segment to synchronize time with the controller node

3. Enable NTP service

systemctl enable chronyd

systemctl stop chronyd

systemctl start chronyd

# the chronyd service is usually already running, so stop and restart it to load the new configuration.

## other nodes (compute and cinder) ##

1. Install the chrony package with yum

yum install chrony -y

2. Modify chrony configuration file

vim /etc/chrony.conf
server controller iburst    # synchronize from the controller

3. Start the service

systemctl stop chronyd

systemctl start chronyd

4. Verify clock synchronization service on controller

chronyc sources

III. Database deployment (controller node)

1. Install MariaDB with yum

yum install mariadb mariadb-server python2-PyMySQL -y

2. Modify mariadb configuration file

vim /etc/my.cnf.d/mariadb-server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
# the following lines are newly added
bind-address = 192.168.37.128          # bind to the controller address
default-storage-engine = innodb        # default storage engine
innodb_file_per_table = on             # one tablespace per table
max_connections = 4096                 # maximum number of connections
collation-server = utf8_general_ci     # collation setting
character-set-server = utf8            # character set setting

3. Start the mariadb service and enable it to start at boot

systemctl enable mariadb.service

systemctl start mariadb.service
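Before continuing, it can be worth confirming that MariaDB is really listening on the bound address; a minimal check (assuming the ss tool from the stock iproute package):

ss -tlnp | grep 3306    # should show mysqld listening on 192.168.37.128:3306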

4. Basic database settings

mysql_secure_installation

# basic setup: press Enter to accept the defaults for everything except the root password, which is set to abc123
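For reference, the interactive session looks roughly like the following (a hedged sketch; the prompt wording varies slightly between MariaDB versions):

# Enter current password for root: press Enter (no password yet)
# Set root password? [Y/n]: Y, then type abc123 twice
# Remove anonymous users? [Y/n]: press Enter to accept the default
# Disallow root login remotely? [Y/n]: press Enter
# Remove test database and access to it? [Y/n]: press Enter
# Reload privilege tables now? [Y/n]: press Enter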

IV. RabbitMQ service deployment (controller node)

1. Install the rabbitmq-server package with yum

yum install rabbitmq-server -y

2. Start the rabbitmq service and enable it to start at boot

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

3. With the service running, add a user and grant permissions

rabbitmqctl add_user openstack 123456    # add the openstack user with password 123456

rabbitmqctl set_permissions openstack ".*" ".*" ".*"    # grant the openstack user configure, write, and read permissions
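To double-check the account and its permissions, two read-only commands from the stock rabbitmqctl tool can be used (a sketch):

rabbitmqctl list_users          # openstack should appear alongside guest
rabbitmqctl list_permissions    # openstack should have ".*" for configure, write, and read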

V. Memcached service deployment (controller node)

1. Install the memcached package with yum

yum install memcached python-memcached -y

2. Modify memcached configuration file

vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.37.128,::1"    # modify the listening IP address

3. Start the memcached service and enable it to start at boot

systemctl enable memcached.service

systemctl start memcached.service
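A quick way to confirm memcached picked up the new listening address (a sketch using the ss tool):

ss -tlnp | grep 11211    # should show memcached bound to 192.168.37.128:11211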

VI. etcd service discovery deployment (controller node)

etcd is a highly available distributed key-value database used for service discovery. Service discovery is one of the most common problems in distributed systems: how processes or services in the same distributed cluster find each other and establish connections.

1. Install the etcd package with yum

yum install etcd -y

2. Modify the etcd configuration file. The result is as follows:

vim /etc/etcd/etcd.conf
# enable the following options: ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, ETCD_LISTEN_CLIENT_URLS
[member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"    # data file location
ETCD_LISTEN_PEER_URLS="http://192.168.37.128:2380"    # address for listening to cluster peers
ETCD_LISTEN_CLIENT_URLS="http://192.168.37.128:2379"    # address for listening to clients
ETCD_NAME="controller"    # member name
[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.37.128:2380"    # peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.37.128:2379"    # client address advertised to the cluster
ETCD_INITIAL_CLUSTER="controller=http://192.168.37.128:2380"    # initial cluster members
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"    # cluster token
ETCD_INITIAL_CLUSTER_STATE="new"    # cluster state

3. Enable the etcd service and set it to start automatically.

systemctl enable etcd.service

systemctl start etcd.service
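To verify that etcd answers on the advertised client URL, a hedged key/value round trip can be used (the key name foo is arbitrary; ETCDCTL_API=3 selects the v3 command set shipped with the CentOS 7 etcd package):

export ETCDCTL_API=3
etcdctl --endpoints=http://192.168.37.128:2379 put foo bar    # write a test key
etcdctl --endpoints=http://192.168.37.128:2379 get foo        # should print foo and bar
etcdctl --endpoints=http://192.168.37.128:2379 del foo        # clean up the test key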

VII. Keystone authentication (controller node)

1. Create a dedicated keystone database, create the keystone user, and grant privileges

mysql -u root -p    # password: abc123

create database keystone;

grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';    # authorize the local user

grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';    # authorize access from other hosts

flush privileges;

2. Install the packages with yum

yum install openstack-keystone httpd mod_wsgi -y

3. Edit the keystone configuration file

vim /etc/keystone/keystone.conf
[database]    # around line 737
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]       # around line 2922
provider = fernet    # token provider (Fernet secure message-signing algorithm)

4. Synchronize the database

su -s /bin/sh -c "keystone-manage db_sync" keystone
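To confirm the synchronization actually populated the keystone database, a quick check with the credentials created above (a sketch):

mysql -u keystone -p123456 -e "use keystone; show tables;"    # should list the keystone tables (user, project, role, ...)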

5. Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
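These two commands create key repositories on disk rather than database entries; a simple way to confirm they exist and are owned by keystone (a sketch, assuming the default key locations):

ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/    # each directory should contain keys 0 and 1, owned by keystone:keystone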

6. Set a password for the administrator and register the three endpoint addresses

keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

VIII. Apache service deployment

1. Edit the httpd configuration file

vim /etc/httpd/conf/httpd.conf
ServerName controller

2. Create a symbolic link so that Apache recognizes keystone

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3. Enable the Apache service and set it to start automatically.

systemctl enable httpd.service

systemctl start httpd.service
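At this point Keystone should be reachable through Apache on both ports served by the wsgi-keystone.conf linked above; a hedged smoke test with curl:

curl http://controller:5000/v3     # should return a small JSON document describing the v3 API
curl http://controller:35357/v3    # the same for the admin port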

4. Declare environment variables

export OS_USERNAME=admin

export OS_PASSWORD=123456

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

IX. Create the demo platform: domain, projects, users, and roles

1. Create a domain

openstack domain create --description "Domain" example

2. Create a project Service Project

openstack project create --domain default --description "Service Project" service

3. Create a platform demo project

openstack project create --domain default --description "Demo Project" demo

4. Create the demo user

openstack user create --domain default --password-prompt demo

# enter password: 123456

5. Create a user role

openstack role create user

6. Add the user role to the demo project and user

openstack role add --project demo --user demo user
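If you want to confirm the assignment before moving on, a hedged check with the same client (the --names option just prints names instead of IDs):

openstack role assignment list --user demo --project demo --names    # should show the user role bound to the demo user on the demo project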

X. Verify Keystone operation

1. Unset the temporary environment variables

unset OS_AUTH_URL OS_PASSWORD

2. Request an authentication token as the admin user

openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue

# password: 123456

3. Request an authentication token as the demo user

openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username demo token issue

# password: 123456

4. Create an admin-openrc script

vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

5. Create a demo-openrc script

vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

6. Use the script to request an authentication token

source ~/admin-openrc

openstack token issue
