
How to Remotely Log In to a CVM When Building an OpenStack Platform


This article explains how to remotely log in to a cloud virtual machine (CVM) while building an OpenStack platform. The editor finds it very practical and hopes you will learn something from it. Let's take a look together.

Note: the control node is server10.example.com; the new nova node is desktop10.example.com.

DNS resolution has already been configured for every host in the experimental environment.
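If you do not have a DNS server, the same resolution can be achieved with /etc/hosts entries on each host. A minimal sketch, using the addresses that appear in the commands later in this article:

192.168.0.110 server10.example.com server10
192.168.0.10 desktop10.example.com desktop10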

1. Manage the neutron services and configure network services for the nova-compute node

[root@server10 ~]# source /root/keystonerc_admin

[root@server10 ~(keystone_admin)]# vim /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

tenant_network_type = vlan    # tenant network type
network_vlan_ranges = physnet1:1:100    # VLAN ID range (address pool)
bridge_mappings = physnet1:br-eth2    # bridge mapping for the physical network

Create the corresponding bridge and port:

[root@server10 ~(keystone_admin)]# ovs-vsctl add-br br-eth2

[root@server10 ~(keystone_admin)]# ovs-vsctl add-port br-eth2 eth2

[root@server10 ~(keystone_admin)]# ovs-vsctl show

Restart all neutron-related services (condrestart) and check the logs, because a minor error in any part of the build process can affect the entire platform:

[root@server10 ~(keystone_admin)]# for i in /etc/init.d/neutron-*; do $i condrestart; done

[root@server10 ~(keystone_admin)]# grep ERROR /var/log/neutron/openvswitch-agent.log

Check the services started on the local host:

[root@server10 ~(keystone_admin)]# nova-manage service list    Only the current host's services appear as enabled, because so far this is an all-in-one deployment (all services run on a single host).

2. Add a nova node and configure the same nova-compute service

Add a new host; its memory and disk can be sized according to your actual needs. (nova-compute nodes mainly run CVMs and related services, so choose sizes that suit your environment. Because this is an experimental environment, the host's resources are shared with nova-compute.)

Note, however, that the IP of the nova-compute node must be set to a static address.
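On a RHEL 6-style system (matching the /etc/init.d and chkconfig commands used throughout this article), a minimal sketch of a static address in /etc/sysconfig/network-scripts/ifcfg-eth0 might look like the following; the device name eth0 and the netmask are assumptions, and the IP is the one used for desktop10 below:

DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.10
NETMASK=255.255.255.0

Restart the network service (/etc/init.d/network restart) after editing the file.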

New node configuration for nova-compute:

# yum update -y; reboot

# yum install -y openstack-nova-compute

Since nova-compute and neutron are already configured correctly on the server10.example.com host, we copy the configuration files to the new nova node in full and then make appropriate fine-tuning.

[root@server10 nova(keystone_admin)]# scp /etc/nova/nova.conf 192.168.0.10:/etc/nova/

[root@desktop10 nova]# vim /etc/nova/nova.conf

my_ip=192.168.0.10

vncserver_listen=192.168.0.10

vncserver_proxyclient_address=192.168.0.10

libvirt_type=kvm    (remember: use kvm on a physical host; use qemu if the compute node is itself a virtual machine)
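Before choosing kvm, it is worth verifying that the compute host actually supports hardware virtualization. A quick sanity check (not part of the original steps):

[root@desktop10 nova]# egrep -c '(vmx|svm)' /proc/cpuinfo    a non-zero count means Intel VT-x or AMD-V is available

[root@desktop10 nova]# lsmod | grep kvm    the kvm and kvm_intel (or kvm_amd) modules should be loaded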

Pay particular attention to the MySQL connection string:

connection=mysql://nova:westos@192.168.0.110/nova    log in to the nova database as user nova with password westos (the copied file points at localhost by default)
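To confirm that the new node can really reach that database, a quick test (a sketch, assuming the mysql client is installed and the nova user is allowed to connect from this host):

[root@desktop10 nova]# mysql -h 192.168.0.110 -u nova -pwestos nova -e 'SELECT 1;'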

[root@desktop10 nova]# /etc/init.d/openstack-nova-compute start

[root@desktop10 nova]# /etc/init.d/libvirtd start

Note: in practice, the libvirtd service should already be running in the environment!

[root@desktop10 nova]# chkconfig openstack-nova-compute on

[root@desktop10 nova]# chkconfig libvirtd on

[root@desktop10 nova]# nova-manage service list

Binary            Host                   Zone      Status    State  Updated_At
nova-conductor    server10.example.com   internal  enabled   :-)    2014-08-03 03:42:26
nova-compute      server10.example.com   nova      enabled   :-)    2014-08-03 03:42:26
nova-consoleauth  server10.example.com   internal  enabled   :-)    2014-08-03 03:42:25
nova-scheduler    server10.example.com   internal  enabled   :-)    2014-08-03 03:42:25
nova-compute      desktop10.example.com  nova      enabled   :-)    2014-08-03 03:42:31

The output shows that a new nova-compute node, desktop10.example.com, has been added.

At the same time, to make it easier to tell which node subsequent CVMs start on, we first disable the nova-compute service on server10.

[root@desktop10 nova]# nova-manage service disable --host server10.example.com --service nova-compute

At this point there is only one enabled nova-compute node in the environment, which means subsequent CVMs will start on the desktop10 host and consume desktop10's hardware resources.
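To confirm, re-run the service listing; server10's nova-compute entry should now show disabled while desktop10's stays enabled:

[root@desktop10 nova]# nova-manage service list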

3. Configure neutron network services for nova-compute nodes

[root@desktop10 nova]# yum install -y openstack-neutron-openvswitch

[root@desktop10 neutron]# scp 192.168.0.110:/etc/neutron/neutron.conf /etc/neutron/

[root@desktop10 neutron]# scp 192.168.0.110:/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugins/openvswitch/

[root@desktop10 neutron]# /etc/init.d/openvswitch start

[root@desktop10 neutron]# chkconfig openvswitch on

[root@desktop10 neutron]# ovs-vsctl add-br br-int    create the integration bridge br-int

[root@desktop10 neutron]# ovs-vsctl add-br br-eth2

[root@desktop10 neutron]# ovs-vsctl add-port br-eth2 br101    bridge the local br101 interface onto br-eth2

[root@desktop10 neutron]# /etc/init.d/neutron-openvswitch-agent start

Starting neutron-openvswitch-agent: [OK]

[root@desktop10 neutron]# chkconfig neutron-openvswitch-agent on

[root@desktop10 neutron]# chkconfig neutron-ovs-cleanup on

[root@desktop10 neutron]# tail -f /var/log/neutron/openvswitch-agent.log

[root@desktop10 neutron]# ovs-vsctl show

0d1feaba-56ce-4696-9d16-0a993cff5923
    Bridge br-int
        Port "int-br-eth2"
            Interface "int-br-eth2"
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "br101"
            Interface "br101"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
    ovs_version: "1.11.0"

If you see output like the above, the configuration is basically correct.
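You can also confirm from the control node that the new agent has registered with neutron (a sketch; the exact output columns vary by release):

[root@server10 ~(keystone_admin)]# neutron agent-list    an Open vSwitch agent entry for desktop10.example.com should appear as alive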

4. Note that when you add a new nova node, you also need to install a dependency package so that it can communicate with the master node (server10), as follows:

[root@desktop10 neutron]# yum install -y openstack-neutron-ml2.noarch

[root@desktop10 nova]# vim /etc/nova/nova.conf

novncproxy_base_url=http://192.168.0.110:6080/vnc_auto.html

glance_host=192.168.0.110    lets nova-compute find glance images when creating a CVM

rpc_backend=nova.openstack.common.rpc.impl_qpid    lets nova-compute communicate with the master node

[root@desktop10 nova]# /etc/init.d/openstack-nova-compute restart

[root@desktop10 nova]# chkconfig openstack-nova-compute on

With that, everything that needs to be configured is in place.
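Pulling the pieces together, the complete set of overrides made to /etc/nova/nova.conf on the new compute node in this walkthrough is (all values taken from the steps above):

my_ip=192.168.0.10
vncserver_listen=192.168.0.10
vncserver_proxyclient_address=192.168.0.10
libvirt_type=kvm
connection=mysql://nova:westos@192.168.0.110/nova
novncproxy_base_url=http://192.168.0.110:6080/vnc_auto.html
glance_host=192.168.0.110
rpc_backend=nova.openstack.common.rpc.impl_qpid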

Now log in to https://server10.example.com/dashboard as the admin user to create the related project and user services.

1. Create a project

2. Create a user (with admin identity and membership). The identity is the user's role, which determines the user's permissions.

Subsequent operations are performed as this admin user.

3. Upload images via the glance service; here we have already uploaded small.img and web.img.

4. Create the networks. First create the external network for the project; it will carry communication between the internal and external networks.

Click Net1 and create a subnet for it: the private range 172.24.X.0 (no gateway is required). The 172 segment simulates a public network address on the physical host.

In the subnet details, do not use DHCP.

Then create the internal network: the 192.168.32.0 segment, which is actually carried by the br101 interface bridged earlier.

Click net2 and create the intranet subnet 192.168.32.0.

Looking at the network topology diagram, you can see that we have now created two networks (172 and 192). The CLI equivalents are sketched below.
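A sketch of creating the same two networks from the command line, assuming the names net1/net2 and the 172.24.10.0/24 range implied by the floating IP used later (substitute your own X in 172.24.X.0):

[root@server10 ~(keystone_admin)]# neutron net-create net1 --router:external=True
[root@server10 ~(keystone_admin)]# neutron subnet-create --name net1-sub --no-gateway --disable-dhcp net1 172.24.10.0/24
[root@server10 ~(keystone_admin)]# neutron net-create net2
[root@server10 ~(keystone_admin)]# neutron subnet-create --name net2-sub net2 192.168.32.0/24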

5. Add a router.

Create a router to act as the gateway (view the topology). Adding routing information connects the two networks, enabling communication between the internal and external networks.

Add a router interface to link the two network segments.
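The CLI equivalent, as a sketch (the router name router1 and the subnet name net2-sub from the sketch above are assumptions):

[root@server10 ~(keystone_admin)]# neutron router-create router1
[root@server10 ~(keystone_admin)]# neutron router-gateway-set router1 net1
[root@server10 ~(keystone_admin)]# neutron router-interface-add router1 net2-sub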

6. Create a security group and set rules to customize access; this amounts to a simple firewall policy.
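For example, rules that allow SSH and ping into instances in the default group (a sketch using the nova CLI of this era):

[root@server10 ~(keystone_admin)]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
[root@server10 ~(keystone_admin)]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0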

Create a key pair and download the private key.

Assign a floating IP (public network IP): allocate an address from the 172 public network for client communication.
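The CLI equivalents, as a sketch (the key name key1 is an assumption):

[root@server10 ~(keystone_admin)]# nova keypair-add key1 > key1.pem
[root@server10 ~(keystone_admin)]# chmod 600 key1.pem
[root@server10 ~(keystone_admin)]# nova floating-ip-create net1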

7. Create the CVM type (flavor), that is, the instance's hardware profile: define the required vCPU, memory, and cloud disk sizes.
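A sketch of the CLI equivalent (the flavor name, ID, and sizes are assumptions):

[root@server10 ~(keystone_admin)]# nova flavor-create m1.custom 6 1024 10 1    1024 MB RAM, 10 GB disk, 1 vCPU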

8. Start the CVM: launch it and fill in the appropriate information (hostname, etc.).

The administrator password here is the password for accessing the CVM, although it cannot be used by default. Choose the security group.

Select the network (view the network topology).

Click More to bind the floating IP (for direct login access to the CVM). A CLI sketch of these last steps follows.
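The CLI equivalent of launching the instance and binding the floating IP, as a sketch (the instance name vm1 is an assumption, and <net2-id> stands for the UUID of the internal network):

[root@server10 ~(keystone_admin)]# nova boot --flavor m1.custom --image small.img --key-name key1 --security-groups default --nic net-id=<net2-id> vm1
[root@server10 ~(keystone_admin)]# nova add-floating-ip vm1 172.24.10.4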

Start the CVM:

The CVM has started, and the default IP it obtained is 192.168.32.2.

[root@desktop10 nova]# virsh list    View with the virsh command

 Id    Name                 State
----------------------------------
 1     server10             running    (server10 is itself a KVM virtual machine)
 2     instance-00000003    running    (this is the OpenStack CVM)

Simulated public-network IP login test: use the previously created key pair to log in to the CVM remotely (172.24.10.4 is the public-network IP assigned earlier).
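A sketch of the login itself, assuming the key file from earlier and that the image accepts root logins (many cloud images use a distribution-specific user instead):

# ssh -i key1.pem root@172.24.10.4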

After logging in remotely, test the system from the inside:

The above is how to remotely log in to a CVM when building an OpenStack platform. The editor believes these knowledge points may come up in everyday work, and hopes you have learned more from this article. For more details, please follow the industry information channel.
