How do you build an OpenStack platform and log in to its web interface? Many newcomers are not clear on this, so the walkthrough below goes through the process in detail, step by step. Anyone who needs it is welcome to follow along; hopefully you will get something out of it.
Environment preparation:
RHEL 6.5, 4 GB of memory, 70 GB of disk
Configure the yum repositories (several repositories are defined; the first three baseurl entries below point to local, customized repositories)
baseurl=ftp://instructor.example.com/pub/rhel6.5/Server
baseurl=ftp://instructor.example.com/pub/errata
baseurl=http://instructor.example.com/pub/OpenStack/
baseurl=ftp://ftp.redhat.com/pub/redhat/linux/enterprise/$releasever/en/os/SRPMS/
baseurl=ftp://ftp.redhat.com/pub/redhat/linux/beta/$releasever/en/os/SRPMS/
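For reference only, a complete repository definition for one of these sources could look like the sketch below; the file name and repo ID are illustrative and not taken from the original environment:
# cat /etc/yum.repos.d/openstack.repo
[openstack]
name=Local OpenStack packages
baseurl=http://instructor.example.com/pub/OpenStack/
enabled=1
gpgcheck=0
(gpgcheck=0 assumes no GPG key is published for the local repository; adjust this to your own setup.)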
# yum update -y; reboot
Section 1: installation of qpid communication nodes
2. Install qpid
[root@server10 ~]# yum install -y qpid-cpp-server qpid-cpp-server-ssl cyrus-sasl-md5
3. Create the SASL user (-f selects the password database, -u sets the user domain)
[root@server10 ~]# saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID qpidauth
[root@server10 ~]# sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
qpidauth@QPID: userPassword
4. Create the ACL authorization file (it does not exist by default)
[root@server10 ~]# echo 'acl allow qpidauth@QPID all all' > /etc/qpid/qpidauth.acl
5. Have the qpidd daemon read the ACL file
[root@server10 ~]# echo "QPIDD_OPTIONS='--acl-file /etc/qpid/qpidauth.acl'" >> /etc/sysconfig/qpidd
6. Fix ownership and permissions
[root@server10 ~]# chown qpidd /etc/qpid/qpidauth.acl
[root@server10 ~]# chmod 600 /etc/qpid/qpidauth.acl
7. Modify /etc/qpidd.conf
cluster-mechanism=DIGEST-MD5
auth=yes
8. Create a standalone PKI directory (watch the directory ownership and permissions)
[root@server10 ~]# mkdir /etc/pki/tls/qpid
[root@server10 ~]# chmod 700 /etc/pki/tls/qpid/
[root@server10 ~]# chown qpidd /etc/pki/tls/qpid/
The qpidd user already exists; it is created when the package is installed.
9. Create the password file and set its permissions
[root@server10 ~]# echo westos > /etc/qpid/qpid.pass
[root@server10 ~]# chmod 600 /etc/qpid/qpid.pass
[root@server10 ~]# chown qpidd /etc/qpid/qpid.pass
10. Generate the certificate database (certutil)
[root@server10 ~]# echo $HOSTNAME
server10.example.com
[root@server10 ~]# certutil -N -d /etc/pki/tls/qpid/ -f /etc/qpid/qpid.pass
Note the files generated under /etc/pki/tls/qpid/; their ownership still needs to be fixed.
11. Generate the self-signed certificate; -n sets the nickname, which must be the full hostname (again, watch the permissions of the generated files)
[root@server10 ~]# certutil -S -d /etc/pki/tls/qpid/ -n server10.example.com -s "CN=server10.example.com" -t "CT," -x -f /etc/qpid/qpid.pass -z /usr/bin/certutil
[root@server10 tls]# chown -R qpidd qpid/
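As an added check (not part of the original notes), the NSS database can be listed to confirm that the certificate exists under the expected nickname:
[root@server10 ~]# certutil -L -d /etc/pki/tls/qpid/
The output should include an entry named server10.example.com.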
12. Point the main configuration file at the certificate database, certificate name, and password file
(check the log after starting qpidd)
[root@server10 ~]# vim /etc/qpidd.conf
ssl-cert-db=/etc/pki/tls/qpid
ssl-cert-name=server10.example.com
ssl-cert-password-file=/etc/qpid/qpid.pass
require-encryption=yes
[root@server10 ~]# /etc/init.d/qpidd restart
[root@server10 ~]# tail -f /var/log/messages      (the log should show the broker running; a wrong certificate password would also show up here)
[root@server10 ~]# chkconfig qpidd on
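As an extra sanity check (an addition to the original steps), verify that the broker is listening on both the plain AMQP port and the SSL port; the values below assume the defaults of 5672 and 5671:
[root@server10 ~]# netstat -tlnp | egrep '5671|5672'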
#
Section 2: identity service: Keystone (used globally)
1. Install keystone and related packages
[root@server10 ~]# yum install openstack-keystone openstack-selinux openstack-utils -y
2. Initialize the keystone service and import its schema into the database (the mysql service is not installed by default)
[root@server10 ~]# openstack-db --init --service keystone      (requires the mysql service to be installed and configured)
3. Set up keystone's PKI signing certificates (this populates the certificate files; you could also generate them yourself with openssl)
[root@server10 ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
4. Fix the ownership of the corresponding SSL directory (/etc/keystone/ssl)
[root@server10 ~]# chown -R keystone:keystone /etc/keystone/ssl/
5. Generate an admin token and export it:
[root@server10 ~]# openssl rand -hex 10      (use openssl to generate 10 random bytes in hex)
a030068247b339b52f37
[root@server10 ~]# echo a030068247b339b52f37 > /root/ks_admin_token
[root@server10 ~]# cat ks_admin_token
a030068247b339b52f37
[root@server10 ~]# export SERVICE_TOKEN=a030068247b339b52f37
6. Export the service endpoint
[root@server10 ~]# export SERVICE_ENDPOINT=http://server10.example.com:35357/v2.0
7. Configure the keystone file
[root@server10 ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
[root@server10 ~]# vim /etc/keystone/keystone.conf      (check the result and remove the old default admin_token entry)
8. Start keystone (and enable it at boot)
[root@server10 ~]# /etc/init.d/openstack-keystone restart
[root@server10 ~]# chkconfig openstack-keystone on
Note: after startup, check the log for errors and confirm the listening ports.
[root@server10 ~]# grep ERROR /var/log/keystone/keystone.log
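One way to confirm the ports (an added check) is with netstat; keystone's defaults are 5000 for the public API and 35357 for the admin API:
[root@server10 ~]# netstat -tlnp | egrep '5000|35357'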
9. Create the keystone service (note its id)
[root@server10 ~]# keystone service-create --name=keystone --type=identity --description="keystone identity service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    keystone identity service     |
|      id     | 5a1d8b6901f6450fa5b063e6a002601c |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
Note: ID is unique.
10. Create the endpoints (public, internal, and admin interfaces)
[root@server10 ~]# keystone endpoint-create --service-id 5a1d8b6901f6450fa5b063e6a002601c \
>   --publicurl 'http://server10.example.com:5000/v2.0' \
>   --adminurl 'http://server10.example.com:35357/v2.0' \
>   --internalurl 'http://server10.example.com:5000/v2.0'
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://server10.example.com:35357/v2.0  |
|      id     | 714dbd31a3bd45feafa7ca3539525fb2        |
| internalurl | http://server10.example.com:5000/v2.0   |
|  publicurl  | http://server10.example.com:5000/v2.0   |
|    region   | regionOne                               |
|  service_id | 5a1d8b6901f6450fa5b063e6a002601c        |
+-------------+-----------------------------------------+
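If you want to double-check what has been registered so far (an added verification step), the service and endpoint lists can be printed:
[root@server10 ~]# keystone service-list
[root@server10 ~]# keystone endpoint-list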
11. Create a user, a role, and a tenant
[root@server10 ~]# keystone user-create --name admin --pass westos
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | f90b1ed5677a42b0b70544367d804222 |
|   name   |              admin               |
+----------+----------------------------------+
[root@server10 ~]# keystone role-create --name admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | ab686060308d470887911c19a8c011b4 |
|   name   |              admin               |
+----------+----------------------------------+
[root@server10 ~]# keystone tenant-create --name admin
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | b4aa48fd47724a19a9e09eeb1d8199df |
|     name    |              admin               |
+-------------+----------------------------------+
12. Add the user role (associate the user created above with the role and tenant)
[root@server10 ~]# keystone user-role-add --user admin --role admin --tenant admin
13. Write the keystone admin environment file (create it yourself)
[root@server10 ~]# vim /root/keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=westos
export OS_AUTH_URL=http://server10.example.com:35357/v2.0/
export PS1='[\u@\h \W (keystone_admin)]\$ '
14. Unset the bootstrap variables and source the keystone admin file, then list the keystone users
[root@server10 ~]# unset SERVICE_TOKEN
[root@server10 ~]# unset SERVICE_ENDPOINT
[root@server10 ~]# source /root/keystonerc_admin
[root@server10 ~ (keystone_admin)]# keystone user-list
+----------------------------------+-------+---------+-------+
|                id                |  name | enabled | email |
+----------------------------------+-------+---------+-------+
| f90b1ed5677a42b0b70544367d804222 | admin |   True  |       |
+----------------------------------+-------+---------+-------+
Section 3: swift storage (add two disks to the host)
1. Install the swift proxy, object, container, and account packages
[root@server10 ~ (keystone_admin)]# yum install -y openstack-swift-proxy openstack-swift-object openstack-swift-container openstack-swift-account memcached
2. Create the swift user and the services tenant (remember: all the OpenStack services share the same services tenant)
[root@server10 ~ (keystone_admin)]# keystone user-create --name swift --pass westos
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 2e86f4f604cd4edaa535caf8f19af9db |
|   name   |              swift               |
+----------+----------------------------------+
[root@server10 ~ (keystone_admin)]# keystone tenant-create --name services
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | 4dd069c858834df99733119353d1c822 |
|     name    |             services             |
+-------------+----------------------------------+
3. Associate the user with the role and tenant
[root@server10 ~ (keystone_admin)]# keystone user-role-add --role admin --tenant services --user swift
4. Create the service
[root@server10 ~ (keystone_admin)]# keystone service-create --name swift --type object-store --description "swift storage service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |      swift storage service       |
|      id     | 970407c1c93248a3abe25e59e3da9108 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+
5. Define the URLs for authentication:
[root@server10 ~ (keystone_admin)]# keystone endpoint-create --service-id 970407c1c93248a3abe25e59e3da9108 \
>   --publicurl "http://server10.example.com:8080/v1/AUTH_%(tenant_id)s" \
>   --adminurl "http://server10.example.com:8080/v1/AUTH_%(tenant_id)s" \
>   --internalurl "http://server10.example.com:8080/v1/AUTH_%(tenant_id)s"
+-------------+---------------------------------------------------------+
|   Property  |                          Value                          |
+-------------+---------------------------------------------------------+
|   adminurl  | http://server10.example.com:8080/v1/AUTH_%(tenant_id)s |
|      id     | 2f5a84921b3f4d2ba067f5dec2d9b529                        |
| internalurl | http://server10.example.com:8080/v1/AUTH_%(tenant_id)s |
|  publicurl  | http://server10.example.com:8080/v1/AUTH_%(tenant_id)s |
|    region   | regionOne                                               |
|  service_id | 970407c1c93248a3abe25e59e3da9108                        |
+-------------+---------------------------------------------------------+
6. Find the new disks and create a primary partition on each
[root@server10 ~ (keystone_admin)]# fdisk -cu /dev/vdb
[root@server10 ~ (keystone_admin)]# fdisk -cu /dev/vdc
7. Format the partitions and mount them at boot (remember to back up /etc/fstab before editing it)
[root@server10 ~ (keystone_admin)]# mkfs.ext4 /dev/vdb1
[root@server10 ~ (keystone_admin)]# mkfs.ext4 /dev/vdc1
[root@server10 ~ (keystone_admin)]# mkdir -p /srv/node/z{1,2}d1
[root@server10 etc (keystone_admin)]# cat /etc/fstab
/dev/vdb1   /srv/node/z1d1   ext4   acl,user_xattr   0 0
/dev/vdc1   /srv/node/z2d1   ext4   acl,user_xattr   0 0
[root@server10 etc (keystone_admin)]# mount -a
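A quick way to confirm that both devices are mounted where expected (added here as a sanity check):
[root@server10 etc (keystone_admin)]# df -h /srv/node/z1d1 /srv/node/z2d1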
8. Set the directory ownership
[root@server10 node (keystone_admin)]# chown -R swift:swift /srv/node/
9. Restore the security context
[root@server10 node (keystone_admin)]# restorecon -Rv /srv/
This relabels the directories with the swift SELinux type; with -v the changes are printed as they are made.
10. Edit the configuration files (you can back up /etc/swift/swift.conf and the account/container/object server configs first)
[root@server10 node (keystone_admin)]# openssl rand -hex 10
ed7addafe2a3477d5e92
[root@server10 node (keystone_admin)]# cat /etc/swift/swift.conf
[swift-hash]
swift_hash_path_prefix = ed7addafe2a3477d5e92
swift_hash_path_suffix = ed7addafe2a3477d5e92
Change the bind_ip in the three server configuration files to the host's own IP (192.168.0.110):
# vim /etc/swift/container-server.conf
# vim /etc/swift/account-server.conf
# vim /etc/swift/object-server.conf
11. Enable the three storage services at boot
[root@server10 node (keystone_admin)]# chkconfig openstack-swift-container on
[root@server10 node (keystone_admin)]# chkconfig openstack-swift-object on
[root@server10 node (keystone_admin)]# chkconfig openstack-swift-account on
12. Test: Configure Swift Object Storage Service Rings
Create the three ring builders with three commands (create 12 2 1 means 2^12 partitions, 2 replicas, and a minimum of 1 hour between partition moves):
[root@server10 node (keystone_admin)]# swift-ring-builder /etc/swift/account.builder create 12 2 1
[root@server10 node (keystone_admin)]# swift-ring-builder /etc/swift/container.builder create 12 2 1
[root@server10 node (keystone_admin)]# swift-ring-builder /etc/swift/object.builder create 12 2 1
Then add both devices to each ring (a warning may be printed):
# for i in 1 2; do swift-ring-builder /etc/swift/account.builder add z${i}-192.168.0.110:6002/z${i}d1 100; done
# for i in 1 2; do swift-ring-builder /etc/swift/object.builder add z${i}-192.168.0.110:6000/z${i}d1 100; done
# for i in 1 2; do swift-ring-builder /etc/swift/container.builder add z${i}-192.168.0.110:6001/z${i}d1 100; done
12.1 Rebalance the rings with swift-ring-builder
[root@server10 node (keystone_admin)]# swift-ring-builder /etc/swift/object.builder rebalance
[root@server10 node (keystone_admin)]# swift-ring-builder /etc/swift/container.builder rebalance
[root@server10 node (keystone_admin)]# swift-ring-builder /etc/swift/account.builder rebalance
[root@server10 node (keystone_admin)]# chown -R root:swift /etc/swift/
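Running swift-ring-builder with only the builder file as an argument prints the ring's parameters and devices, which is a convenient way to verify the result (an added check, not in the original notes):
[root@server10 node (keystone_admin)]# swift-ring-builder /etc/swift/account.builder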
13. Start the proxy service
Deploy the Swift Object Storage Proxy Service
Edit the proxy configuration (back up proxy-server.conf first, then fill in this section yourself)
[root@server10 node (keystone_admin)]# vim /etc/swift/proxy-server.conf
[filter:authtoken]
admin_tenant_name = services      (pay attention to the tenant name)
admin_user = swift
admin_password = westos
auth_host = 192.168.0.110
#
13.2 Start memcached and openstack-swift-proxy (remember to enable them at boot)
[root@server10 ~ (keystone_admin)]# /etc/init.d/memcached start; /etc/init.d/openstack-swift-proxy start
[root@server10 ~ (keystone_admin)]# chkconfig memcached on; chkconfig openstack-swift-proxy on
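With the proxy up, a quick end-to-end check (an addition here; it assumes the keystonerc_admin credentials are loaded) is to query the account headers:
[root@server10 ~ (keystone_admin)]# swift stat
If authentication and the proxy are working, this prints the account, container count, and object count.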
14. Create containers:
Validate the Swift Object Storage Configuration
14.1 Generate a 1024-byte file and place it in object storage (a container)
Note: does it have to be done from the /etc/swift directory? (Yes in this walkthrough, because that is the directory being uploaded from and published.)
Problem: on the first attempt the section header had accidentally been written as filter_authtoken instead of filter:authtoken.
Create the files one by one.
[root@server10 swift (keystone_admin)]# head -c 1024 /dev/urandom > data1.file      (several data files are created this way for testing)
# swift upload c1 data1.file      (creates container c1 and uploads the file in one step)
# swift upload c1 data2.file
# swift upload c1 data3.file
# swift upload c2 data3.file
# swift upload c3 data3.file
# swift list                      (shows the three containers that were created: c1, c2, c3)
# swift list c1                   (shows the objects stored in container c1)
# swift delete c3                 (deletes a container)
# swift delete c1 data3.file      (deletes an object inside a container)
For more options, see the swift command's help output.
To create a container and upload into it in one step: swift upload c1 data1.file
Here c1 refers to the container.
Then check under /srv/node: the two storage devices hold the same objects (z1d1 and z2d1), for example under
/srv/node/z2d1/objects
Section 4: configure the Glance image service
1. Install the openstack-glance software
[root@server10 ~ (keystone_admin)]# yum install -y openstack-glance
2. Back up the configuration files
[root@server10 ~ (keystone_admin)]# cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.orig
[root@server10 ~ (keystone_admin)]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig
Copy in the distributed default configuration file:
# cp /usr/share/glance/glance-registry-dist.conf /etc/glance/glance-registry.conf
3. Initialize the glance service and set its database password (this builds on the mysql setup done above)
[root@server10 ~ (keystone_admin)]# openstack-db --init --service glance --password westos --rootpw westos
You can log in to mysql to look at the newly created database.
4. Create the glance user and associate it with the admin role in the services tenant
[root@server10 ~ (keystone_admin)]# keystone user-create --name glance --pass westos
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 41be9c4c80b74ec4bc9df05636859985 |
|   name   |              glance              |
+----------+----------------------------------+
[root@server10 ~ (keystone_admin)]# keystone user-role-add --user glance --role admin --tenant services
5. Edit the glance-api.conf configuration file
[root@server10 ~ (keystone_admin)]# vim /etc/glance/glance-api.conf
[paste_deploy]
flavor = keystone                  (use keystone authentication)
[keystone_authtoken]               (configure the authentication credentials)
admin_tenant_name=services
admin_user=glance
admin_password=westos
[DEFAULT]
qpid_hostname = localhost          (use the qpid host's IP here if the broker is remote)
qpid_username = qpidauth
qpid_password = westos
qpid_port = 5671
qpid_protocol = ssl                (5671 is the standard SSL/AMQPS port)
6. Edit the configuration file /etc/glance/glance-registry.conf
[paste_deploy]
flavor = keystone                  (whether or not there are spaces around '=' does not matter)
[keystone_authtoken]
admin_tenant_name = services
admin_user = glance
admin_password = westos
7. Start the two services, glance-api and glance-registry, and enable them at boot
# /etc/init.d/openstack-glance-api start
# /etc/init.d/openstack-glance-registry start
# chkconfig openstack-glance-api on
# chkconfig openstack-glance-registry on
Check the log; ideally there are no errors.
# egrep 'ERROR|CRITICAL' /var/log/glance/*
/var/log/glance/api.log:2014-07-30 14:09:13.298 21918 ERROR glance.store.sheepdog: Error in store configuration. Unexpected error while running command.
There is a sheepdog store error, which can safely be ignored.
8. Create the glance service in keystone
[root@server10 ~ (keystone_admin)]# keystone service-create --name glance --type image --description "glance image service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       glance image service       |
|      id     | a5806eaa7c4f4b0bac077d344b3e8c3f |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+
9. Create the endpoint URLs
[root@server10 ~ (keystone_admin)]# keystone endpoint-create --service-id a5806eaa7c4f4b0bac077d344b3e8c3f \
>   --publicurl http://server10.example.com:9292 \
>   --adminurl http://server10.example.com:9292 \
>   --internalurl http://server10.example.com:9292
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  | http://server10.example.com:9292 |
|      id     | 53bdf3b884724675bf9da11791bc1fbe |
| internalurl | http://server10.example.com:9292 |
|  publicurl  | http://server10.example.com:9292 |
|    region   | regionOne                        |
|  service_id | a5806eaa7c4f4b0bac077d344b3e8c3f |
+-------------+----------------------------------+
10. Upload an image: Use glance to Upload a System Image
[root@server10 ~ (keystone_admin)]# glance image-create --name xxb --is-public True --disk-format qcow2 --container-format bare --copy-from http://192.168.0.254/pub/materials/small.img
+------------------+--------------------------------------+
|     Property     |                Value                 |
+------------------+--------------------------------------+
|     checksum     |                 None                 |
| container_format |                 bare                 |
|    created_at    |         2014-07-30T06:33:15          |
|     deleted      |                False                 |
|    deleted_at    |                 None                 |
|   disk_format    |                qcow2                 |
|        id        | dd5135b4-c2ce-4c66-8b73-454705b2a310 |
|    is_public     |                 True                 |
|     min_disk     |                  0                   |
|     min_ram      |                  0                   |
|       name       |                 xxb                  |
|      owner       |   b4aa48fd47724a19a9e09eeb1d8199df   |
|    protected     |                False                 |
|       size       |               92908032               |
|      status      |                queued                |
|    updated_at    |         2014-07-30T06:33:15          |
+------------------+--------------------------------------+
10.1 View the image information
[root@server10 ~ (keystone_admin)]# glance image-list
+--------------------------------------+---------+-------------+------------------+-----------+--------+
| ID                                   | Name    | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------+-------------+------------------+-----------+--------+
| dd5135b4-c2ce-4c66-8b73-454705b2a310 | xxb     | qcow2       | bare             | 92908032  | active |
| 1e08ab41-58ed-457d-994e-5f8607f5bb67 | xxbandy | qcow2       | bare             | 258146304 | active |
+--------------------------------------+---------+-------------+------------------+-----------+--------+
10.2 Delete an image
[root@server10 ~ (keystone_admin)]# glance delete ID
[root@server10 ~ (keystone_admin)]# glance image-show xxb      (view the details of the xxb image)
Section 5: create block storage, which is used to attach volumes to cloud instances
1. Install the block storage software:
[root@server10 ~ (keystone_admin)]# yum install -y openstack-cinder
[root@server10 ~ (keystone_admin)]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@server10 ~ (keystone_admin)]# cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
2. Initialization
[root@server10 ~ (keystone_admin)]# openstack-db --init --service cinder --password westos --rootpw westos
# openstack-db --drop --service cinder      (if initialization fails, use this to drop the cinder database and run the init again)
3. Create the cinder user and associate it
[root@server10 ~ (keystone_admin)]# keystone user-create --name cinder --pass westos
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 912094d6e8c54864aa2606a13daae1c9 |
|   name   |              cinder              |
+----------+----------------------------------+
[root@server10 ~ (keystone_admin)]# keystone user-role-add --user cinder --role admin --tenant services
4. Create the volume service
[root@server10 ~ (keystone_admin)]# keystone service-create --name=cinder --type=volume --description="openstack block storage service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description | openstack block storage service  |
|      id     | f8fbbcec6c864ac588f70ee396bb55da |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+
5. Create the endpoint URLs for cinder
[root@server10 ~ (keystone_admin)]# keystone endpoint-create --service-id f8fbbcec6c864ac588f70ee396bb55da --publicurl 'http://server10.example.com:8776/v1/%(tenant_id)s' --adminurl 'http://server10.example.com:8776/v1/%(tenant_id)s' --internalurl 'http://server10.example.com:8776/v1/%(tenant_id)s'
+-------------+----------------------------------------------------+
|   Property  |                       Value                        |
+-------------+----------------------------------------------------+
|   adminurl  | http://server10.example.com:8776/v1/%(tenant_id)s |
|      id     | 3116d4a05f2a4dac8dd712b10aaf4d09                   |
| internalurl | http://server10.example.com:8776/v1/%(tenant_id)s |
|  publicurl  | http://server10.example.com:8776/v1/%(tenant_id)s |
|    region   | regionOne                                          |
|  service_id | f8fbbcec6c864ac588f70ee396bb55da                   |
+-------------+----------------------------------------------------+
6. Back up the configuration file and modify the authentication and message queue settings
[root@server10 ~ (keystone_admin)]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
[root@server10 ~ (keystone_admin)]# cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
[root@server10 ~ (keystone_admin)]# vim /etc/cinder/cinder.conf
[keystone_authtoken]
admin_tenant_name = services
admin_user = cinder
admin_password = westos
[DEFAULT]
qpid_username = qpidauth
qpid_password = westos
qpid_protocol = ssl
qpid_port = 5671
7. Start the services and enable them at boot.
[root@server10 ~ (keystone_admin)]# /etc/init.d/openstack-cinder-scheduler start
[root@server10 ~ (keystone_admin)]# /etc/init.d/openstack-cinder-api start
[root@server10 ~ (keystone_admin)]# /etc/init.d/openstack-cinder-volume start
8. Configure shared storage over iSCSI
[root@server10 ~ (keystone_admin)]# echo 'include /etc/cinder/volumes/*' >> /etc/tgt/targets.conf
[root@server10 ~ (keystone_admin)]# /etc/init.d/tgtd start
[root@server10 ~ (keystone_admin)]# chkconfig tgtd on
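Once tgtd is running, the exported targets can be listed (an added check; tgt-admin ships with the scsi-target-utils package, and the list stays empty until a volume is actually attached to an instance):
[root@server10 ~ (keystone_admin)]# tgt-admin --show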
9. Check the overall status of OpenStack
[root@server10 ~ (keystone_admin)]# openstack-status
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
== Keystone users ==
Authorization Failed: Unable to establish connection to http://server10.example.com:35357/v2.0/tokens
== Glance images ==
Authorization Failed: Unable to establish connection to http://server10.example.com:35357/v2.0/tokens
If everything else is normal, there should be no real problem. (The failure to connect to the tokens URL comes up often during configuration; it frequently clears up on its own if you simply wait and retry.)
10. Create a 2 GB volume named vol1 (to test provisioning of logical volumes)
Use the cinder tool to create a 2 GB logical volume named vol1:
[root@server10 ~ (keystone_admin)]# cinder create --display-name vol1 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-07-30T08:03:05.551543      |
| display_description |                 None                 |
|     display_name    |                 vol1                 |
|          id         | 7d8bde6b-4d83-439d-839a-1f9d5974d94c |
|       metadata      |                  {}                  |
|         size        |                  2                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
11. View the current volume groups and the new volume
[root@server10 ~ (keystone_admin)]# vgs
  VG             #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   1   1   0 wz--n-  4.97g 2.97g
  vol0             1   2   0 wz--n- 29.97g     0
As long as the volume group is named cinder-volumes it will be picked up. (In the deployed answer file there is a 20 GB block storage volume.)
# cinder list
#
Section 6: network configuration
1. Create the network service
[root@server10 ~ (keystone_admin)]# keystone service-create --name neutron --type network --description 'networking service'
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        networking service        |
|      id     | ffc971e1288e48df85a56291ddd9c621 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+
2. Specify the corresponding URLs
[root@server10 ~ (keystone_admin)]# keystone endpoint-create --service-id ffc971e1288e48df85a56291ddd9c621 \
>   --publicurl http://server10.example.com:9696 \
>   --adminurl http://server10.example.com:9696 \
>   --internalurl http://server10.example.com:9696
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  | http://server10.example.com:9696 |
|      id     | 2af628a5043a4bb1ab7e5990305c7a84 |
| internalurl | http://server10.example.com:9696 |
|  publicurl  | http://server10.example.com:9696 |
|    region   | regionOne                        |
|  service_id | ffc971e1288e48df85a56291ddd9c621 |
+-------------+----------------------------------+
3. Create the neutron user and associate it
[root@server10 ~ (keystone_admin)]# keystone user-create --name neutron --pass westos
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | e8a059a320ef4ed5973bb245e56ceb67 |
|   name   |             neutron              |
+----------+----------------------------------+
[root@server10 ~ (keystone_admin)]# keystone user-role-add --user neutron --role admin --tenant services
4. View the user-role assignments
[root@server10 ~ (keystone_admin)]# keystone user-role-list
+----------------------------------+-------+----------------------------------+----------------------------------+
| id                               | name  | user_id                          | tenant_id                        |
+----------------------------------+-------+----------------------------------+----------------------------------+
| ab686060308d470887911c19a8c011b4 | admin | f90b1ed5677a42b0b70544367d804222 | b4aa48fd47724a19a9e09eeb1d8199df |
+----------------------------------+-------+----------------------------------+----------------------------------+
[root@server10 ~ (keystone_admin)]# keystone --os-username neutron --os-password westos --os-tenant-name services user-role-list
+----------------------------------+-------+----------------------------------+----------------------------------+
| id                               | name  | user_id                          | tenant_id                        |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 59d0d13373894bcdb8ad06852a620117 | admin | e8a059a320ef4ed5973bb245e56ceb67 | 3a4b064f7782481fbde472d25d3e496f |
+----------------------------------+-------+----------------------------------+----------------------------------+
5. Install the networking packages
[root@server10 neutron (keystone_admin)]# yum install -y openstack-neutron openstack-neutron-openvswitch
Check the status of qpidd.
6. Configure the main file:
[root@server10 neutron (keystone_admin)]# vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.0.110
qpid_port = 5671
qpid_username = qpidauth
qpid_password = westos
qpid_protocol = ssl
[keystone_authtoken]
admin_tenant_name = services
admin_user = neutron
admin_password = westos
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
7. Create the neutron environment file (copy the admin keystonerc file and modify it)
[root@server10 ~ (keystone_admin)]# cat /root/keystonerc_neutron
export OS_USERNAME=neutron            (changed)
export OS_TENANT_NAME=services        (changed)
export OS_PASSWORD=westos
export OS_AUTH_URL=http://server10.example.com:35357/v2.0/
export PS1='[\u@\h \W (keystone_neutron)]\$ '
8. Switch to the neutron environment and set up the server
[root@server10 ~ (keystone_neutron)]# yum install openstack-nova-common -y
[root@server10 ~ (keystone_neutron)]# neutron-server-setup --yes --rootpw westos --plugin openvswitch
[root@server10 ~ (keystone_neutron)]# neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini stamp head
The message 'No handlers could be found for logger "neutron.common.legacy"' looks suspicious, but it can be ignored as long as no actual error is reported.
9. Start the service
[root@server10 ~ (keystone_neutron)]# /etc/init.d/neutron-server start
[root@server10 ~ (keystone_neutron)]# chkconfig neutron-server on
[root@server10 ~ (keystone_neutron)]# openstack-status      (shows that nova has not started and networking is not fully up yet; continue with the configuration below)
10. Configure the network node
[root@server10 ~ (keystone_neutron)]# neutron-node-setup --plugin openvswitch --qhost 192.168.0.110
[root@server10 ~ (keystone_neutron)]# /etc/init.d/openvswitch start      (and chkconfig openvswitch on)
11. Configure the interfaces (br-ex and br-int)
[root@server10 ~ (keystone_neutron)]# ovs-vsctl add-br br-int
(ovs-vsctl show displays the bridge configuration)
[root@server10 ~ (keystone_neutron)]# vim /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
integration_bridge = br-int
[root@server10 ~ (keystone_neutron)]# /etc/init.d/neutron-openvswitch-agent start
Starting neutron-openvswitch-agent: [ OK ]
[root@server10 ~ (keystone_neutron)]# chkconfig neutron-openvswitch-agent on
[root@server10 ~ (keystone_neutron)]# chkconfig neutron-ovs-cleanup on
[root@server10 ~ (keystone_neutron)]# neutron-dhcp-setup --plugin openvswitch --qhost 192.168.0.110
[root@server10 ~ (keystone_neutron)]# /etc/init.d/neutron-dhcp-agent start
Starting neutron-dhcp-agent: [ OK ]
[root@server10 ~ (keystone_neutron)]# chkconfig neutron-dhcp-agent on
Note: an error appears when checking the dhcp agent log:
[root@server10 ~ (keystone_admin)]# egrep 'ERROR|CRITICAL' /var/log/neutron/dhcp-agent.log
2014-08-02 13:36:31.633 25212 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver
It is only the firewall_driver warning, not a problem with our services.
11.1 Configure the external interface
# ovs-vsctl add-br br-ex
# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex
# vim /etc/sysconfig/network-scripts/ifcfg-eth0      (keep 3 entries: device name, onboot, MAC)
# vim /etc/sysconfig/network-scripts/ifcfg-br-ex     (change the device name; remove the MAC entry)
# ovs-vsctl add-port br-ex eth0; service network restart
[root@server10 ~ (keystone_neutron)]# rpm -q iproute
iproute-2.6.32-130.el6ost.netns.2.x86_64
11.2 Configure the neutron L3 agent
[root@server10 ~ (keystone_neutron)]# neutron-l3-setup --plugin openvswitch --qhost 192.168.0.110
[root@server10 ~ (keystone_neutron)]# /etc/init.d/neutron-l3-agent start
[root@server10 ~ (keystone_neutron)]# chkconfig neutron-l3-agent on
There are still some errors reported, but they are harmless:
[root@server10 network-scripts (keystone_admin)]# egrep 'ERROR|CRITICAL' /var/log/neutron/l3-agent.log
2014-08-02 13:45:27.151 27518 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver
[root@server10 ~ (keystone_neutron)]# openstack-status      (check again; the nova services are still not started)
== Nova services ==
openstack-nova-api: dead (disabled on boot)
openstack-nova-compute: dead (disabled on boot)
openstack-nova-network: dead (disabled on boot)
openstack-nova-scheduler: dead (disabled on boot)
== Glance services ==
The remaining issues here are all nova-related, so continue with the nova node configuration.
#
Section 7: nova installation
Switch back to the admin user and install the packages:
[root@server10 ~ (keystone_admin)]# yum install -y openstack-nova openstack-nova-novncproxy
[root@server10 ~ (keystone_admin)]# source /root/keystonerc_admin
[root@server10 ~ (keystone_admin)]# chown nova:nova /var/log/nova/
Initialize the nova database:
[root@server10 ~ (keystone_admin)]# openstack-db --init --service nova --password westos --rootpw westos
Create the nova user:
[root@server10 ~ (keystone_admin)]# keystone user-create --name nova --pass westos
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | fd4f1d6540464a32b79c8e3a41ba7e70 |
|   name   |               nova               |
+----------+----------------------------------+
Bind the role and create the compute service:
[root@server10 ~ (keystone_admin)]# keystone user-role-add --user nova --role admin --tenant services
[root@server10 ~ (keystone_admin)]# keystone service-create --name nova --type compute --description "openstack compute service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    openstack compute service     |
|      id     | 7dd84b0c66ea4cd891b11b66a1dab754 |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
Create the endpoint URLs:
[root@server10 ~ (keystone_admin)]# keystone endpoint-create --service-id 7dd84b0c66ea4cd891b11b66a1dab754 \
>   --publicurl 'http://server10.example.com:8774/v2/%(tenant_id)s' \
>   --adminurl 'http://server10.example.com:8774/v2/%(tenant_id)s' \
>   --internalurl 'http://server10.example.com:8774/v2/%(tenant_id)s'
+-------------+----------------------------------------------------+
|   Property  |                       Value                        |
+-------------+----------------------------------------------------+
|   adminurl  | http://server10.example.com:8774/v2/%(tenant_id)s |
|      id     | ed1ecf2502b64c9eac29f8047fad7fe5                   |
| internalurl | http://server10.example.com:8774/v2/%(tenant_id)s |
|  publicurl  | http://server10.example.com:8774/v2/%(tenant_id)s |
|    region   | regionOne                                          |
|  service_id | 7dd84b0c66ea4cd891b11b66a1dab754                   |
+-------------+----------------------------------------------------+
Modify the configuration files:
[root@server10 ~ (keystone_admin)]# vim /etc/nova/api-paste.ini
[filter:authtoken]                 (the last part of the file)
admin_tenant_name = services
admin_user = nova
admin_password = westos
auth_host = 192.168.0.110
[root@server10 ~ (keystone_admin)]# vim /etc/nova/nova.conf
qpid_hostname=192.168.0.110
qpid_port=5671
qpid_username=qpidauth
qpid_password=westos
qpid_protocol=ssl
vncserver_listen=192.168.0.110
vncserver_proxyclient_address=192.168.0.110
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
auth_strategy=keystone
libvirt_type=qemu
libvirt_cpu_mode=none
verbose=true
api_paste_config=api-paste.ini
Start the services and enable them at boot (or loop over them: # for i in /etc/init.d/openstack-nova*; do $i restart; done)
# /etc/init.d/libvirtd start
# /etc/init.d/openstack-nova-api start
# /etc/init.d/openstack-nova-compute start
# /etc/init.d/openstack-nova-conductor start
# /etc/init.d/openstack-nova-consoleauth start
# /etc/init.d/openstack-nova-novncproxy start
# /etc/init.d/openstack-nova-scheduler start
[root@server10 ~ (keystone_admin)]# chkconfig libvirtd on
[root@server10 ~ (keystone_admin)]# chkconfig openstack-nova-api on
[root@server10 ~ (keystone_admin)]# chkconfig openstack-nova-compute on
[root@server10 ~ (keystone_admin)]# chkconfig openstack-nova-conductor on
[root@server10 ~ (keystone_admin)]# chkconfig openstack-nova-consoleauth on
[root@server10 ~ (keystone_admin)]# chkconfig openstack-nova-novncproxy on
[root@server10 ~ (keystone_admin)]# chkconfig openstack-nova-scheduler on
[root@server10 ~ (keystone_admin)]# openstack-status
== Nova services ==
(each of the nova services should now report an active status)
== Keystone users ==
+----------------------------------+---------+---------+-------+
| id                               | name    | enabled | email |
+----------------------------------+---------+---------+-------+
| f90b1ed5677a42b0b70544367d804222 | admin   | True    |       |
| 912094d6e8c54864aa2606a13daae1c9 | cinder  | True    |       |
| 41be9c4c80b74ec4bc9df05636859985 | glance  | True    |       |
| fd4f1d6540464a32b79c8e3a41ba7e70 | nova    | True    |       |
| 2ea05745a8684da2bcd7ec12fa522cac | quantum | True    |       |
| 2e86f4f604cd4edaa535caf8f19af9db | swift   | True    |       |
+----------------------------------+---------+---------+-------+
== Glance images ==
+--------------------------------------+---------+-------------+------------------+-----------+--------+
| ID                                   | Name    | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------+-------------+------------------+-----------+--------+
| dd5135b4-c2ce-4c66-8b73-454705b2a310 | xxb     | qcow2       | bare             | 92908032  | active |
| 1e08ab41-58ed-457d-994e-5f8607f5bb67 | xxbandy | qcow2       | bare             | 258146304 | active |
+--------------------------------------+---------+-------------+------------------+-----------+--------+
== Nova managed services ==
+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| nova-conductor   | server10.example.com | internal | enabled | up    | 2014-08-02T09:49:44.000000 | None            |
| nova-compute     | server10.example.com | nova     | enabled | up    | 2014-08-02T09:49:44.000000 | None            |
| nova-consoleauth | server10.example.com | internal | enabled | up    | 2014-08-02T09:49:46.000000 | None            |
| nova-scheduler   | server10.example.com | internal | enabled | up    | 2014-08-02T09:49:39.000000 | None            |
| nova-cells       | server10.example.com | internal | enabled | up    | 2014-08-02T09:49:43.000000 | None            |
| nova-console     | server10.example.com | internal | enabled | up    | 2014-08-02T09:49:45.000000 | None            |
| nova-network     | server10.example.com | internal | enabled | up    | 2014-08-02T09:49:38.000000 | None            |
| nova-cert        | server10.example.com | internal | enabled | up    | 2014-08-02T09:49:43.000000 | None            |
+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
All checks OK!
If nothing reports an error at this stage, you can safely proceed with what follows.
Section 8: installation of the dashboard
[root@server10 ~ (keystone_admin)]# yum install mod_wsgi httpd mod_ssl openstack-dashboard python-memcached -y
Edit the dashboard configuration:
[root@server10 ~ (keystone_admin)]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.0.110"
ALLOWED_HOSTS = ['server10.example.com', 'localhost', '192.168.0.110']
CACHE_BACKEND = 'memcached://127.0.0.1:11211'
[root@server10 ~ (keystone_admin)]# source /root/keystonerc_admin
[root@server10 ~ (keystone_admin)]# keystone role-list
+----------------------------------+----------+
| id                               | name     |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| ab686060308d470887911c19a8c011b4 | admin    |
+----------------------------------+----------+
[root@server10 ~ (keystone_admin)]# keystone role-create --name Member      (create a Member role)
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 9fcca6054e0f45dc8bfb804219199e71 |
|   name   |              Member              |
+----------+----------------------------------+
Set the SELinux boolean so that apache is allowed to make network connections
[root@server10 ~ (keystone_admin)]# setsebool -P httpd_can_network_connect on
[root@server10 ~ (keystone_admin)]# /etc/init.d/httpd restart
[root@server10 ~ (keystone_admin)]# chkconfig httpd on
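Before opening a browser, you can optionally confirm that apache is serving the dashboard URL (an added check; -k skips verification of the self-signed certificate):
[root@server10 ~ (keystone_admin)]# curl -k -I https://server10.example.com/dashboard
A 200 or a redirect response means the WSGI application is being served.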
Log in to https://server10.example.com/dashboard (username=admin passwd=westos)
If you cannot log in, fix the permissions on the following file (the error message points to a permission problem):
# cd /var/lib/openstack-dashboard/
[root@server10 openstack-dashboard (keystone_admin)]# chown apache:apache .secret_key_store
Log in again: you should now reach the dashboard interface.
Of course, once inside you will find that many of the project's services are still empty, so follow the steps from the first day to create the other services step by step.