Nutanix Hyper-converged Deployment Case

2025-02-24 Update From: SLTechnology News&Howtos


1.1. Network topology

The network topology of the project is shown in the following figure:

Description:

1. This diagram focuses on the cabling and network topology of the Nutanix devices; other networks and devices are omitted or shown only roughly.

2. The egress network is located in the equipment room on the 29th floor, while the devices are in the equipment room on the 28th floor. The two are connected directly by optical fiber, with fiber distribution frames used between floors.

3. Two new interfaces, port5 and port6, need to be configured on the firewall, with identical configuration on both. Configure link aggregation on the firewall and bind the two interfaces together.

4. The switches are Huawei S5720-28X. The two switches are stacked through service ports and merged into a single logical switch.

5. The 10 Gigabit optical ports of the two switches uplink to the firewall, with link aggregation configured on those interfaces.

6. The switch downlinks connect to the network cards of the four Nutanix nodes, using Gigabit copper ports and Gigabit network cables.

7. The network connections among the firewalls, switches, and Nutanix nodes are fully redundant, forming a complete hot-standby design. A single point of failure on any device fails over promptly, so the network and services are not interrupted.
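The stacking and link aggregation described in points 4 and 5 can be sketched on the S5720 side roughly as follows. This is an illustrative fragment, not the project's actual configuration; stack member IDs, slot numbering after stacking, and exact command forms vary by software version, so treat every value here as an assumption:

```
# Stack the two switches over service ports (run on each member;
# member IDs and priorities omitted for brevity)
interface stack-port 0/1
 port interface XGigabitEthernet 0/0/4 enable
quit

# After stacking, bundle the two 10GE uplinks toward the firewall
# into one LACP Eth-Trunk
interface Eth-Trunk 1
 mode lacp
quit
interface XGigabitEthernet 1/0/1
 eth-trunk 1
quit
interface XGigabitEthernet 2/0/1
 eth-trunk 1
quit
```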

1.2. Network redundancy and reliability

To meet the redundancy and reliability requirements, the network uses a dual-link, dual-device hot-standby design, combining link aggregation and switch stacking so that the network stays reliable, and services are not interrupted, after a single-point or single-device failure of any network device. A detailed description is shown in the following figure:
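On the firewall side, the link aggregation described above corresponds to an aggregate interface in the FortiOS CLI. A minimal sketch; the interface name agg-nutanix and the active LACP mode are assumptions, not values from the project:

```
config system interface
    edit "agg-nutanix"
        set type aggregate
        set member "port5" "port6"
        set lacp-mode active
    next
end
```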

1.3. Interface correspondence

The interface correspondence of each device is shown in the following table:

| Local device | Interface | Description | Peer device | Interface | Description |
| --- | --- | --- | --- | --- | --- |
| Firewall FG3040B | Port5 | Physical interface | SW1_HW5720-28X | XGE1 | 10GE optical uplink |
| Firewall FG3040B | Port6 | Redundant interface | SW2_HW5720-28X | XGE1 | 10GE optical uplink |
| SW1_HW5720-28X | GE10 | Gigabit copper port | Nutanix node a | NIC-0 | Gigabit copper port |
| SW1_HW5720-28X | GE12 | Gigabit copper port | Nutanix node b | NIC-0 | Gigabit copper port |
| SW1_HW5720-28X | GE14 | Gigabit copper port | Nutanix node c | NIC-0 | Gigabit copper port |
| SW1_HW5720-28X | GE16 | Gigabit copper port | Nutanix node d | NIC-0 | Gigabit copper port |
| SW1_HW5720-28X | XGE1 | 10GE optical port | Firewall FG3040B | Port5 | Cascade connection |
| SW1_HW5720-28X | XGE4 | 10GE optical port | SW2_HW5720-28X | XGE16 | Stacking |
| SW2_HW5720-28X | GE10 | Gigabit copper port | Nutanix node a | NIC-1 | Gigabit copper port |
| SW2_HW5720-28X | GE12 | Gigabit copper port | Nutanix node b | NIC-1 | Gigabit copper port |
| SW2_HW5720-28X | GE14 | Gigabit copper port | Nutanix node c | NIC-1 | Gigabit copper port |
| SW2_HW5720-28X | GE16 | Gigabit copper port | Nutanix node d | NIC-1 | Gigabit copper port |
| SW2_HW5720-28X | XGE1 | 10GE optical port | Firewall FG3040B | Port6 | Cascade connection |
| SW2_HW5720-28X | XGE4 | 10GE optical port | SW1_HW5720-28X | XGE16 | Stacking |
| Nutanix_NX-1465-g4 | NIC-0 | Node a | SW1_HW5720-28X | GE10 | Gigabit copper port |
| Nutanix_NX-1465-g4 | NIC-1 | Node a | SW2_HW5720-28X | GE10 | Gigabit copper port |
| Nutanix_NX-1465-g4 | NIC-0 | Node b | SW1_HW5720-28X | GE12 | Gigabit copper port |
| Nutanix_NX-1465-g4 | NIC-1 | Node b | SW2_HW5720-28X | GE12 | Gigabit copper port |
| Nutanix_NX-1465-g4 | NIC-0 | Node c | SW1_HW5720-28X | GE14 | Gigabit copper port |
| Nutanix_NX-1465-g4 | NIC-1 | Node c | SW2_HW5720-28X | GE14 | Gigabit copper port |
| Nutanix_NX-1465-g4 | NIC-0 | Node d | SW1_HW5720-28X | GE16 | Gigabit copper port |
| Nutanix_NX-1465-g4 | NIC-1 | Node d | SW2_HW5720-28X | GE16 | Gigabit copper port |

1.4. Installation and deployment of the Nutanix hyper-converged appliance

1.4.1. Equipment introduction

This project purchased a Nutanix NX-1465 appliance: a 2U chassis with 4 nodes, each node carrying one 480 GB SSD and two 6 TB HDDs. The front view of the device is as follows:

The local buttons and indicators, taking Node A as an example, are shown in the following figure:

On the back, each node has two RJ45 Gigabit Ethernet ports, two 10 Gigabit SFP ports, and one IPMI out-of-band management port; the chassis has a dual power supply design, as shown in the figure:

The MAC address and serial number of each network port are labeled on the back of each node. For details, see the following figure:

MAC addresses of the IPMI ports on the back of the device:

| Node | IPMI MAC address |
| --- | --- |
| A | 0C:C4:7A:C2:D8:70 |
| B | 0C:C4:7A:C2:D9:75 |
| C | 0C:C4:7A:C2:D9:48 |
| D | 0C:C4:7A:C2:D8:BD |
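Since these chassis MACs have to be typed into Foundation by hand later, it can be worth validating and normalizing them first. A minimal Python sketch; the lower-case colon form is just a consistency choice here, not a Foundation requirement:

```python
import re

# IPMI MAC addresses of the four nodes (values from the table above).
IPMI_MACS = {
    "A": "0C:C4:7A:C2:D8:70",
    "B": "0C:C4:7A:C2:D9:75",
    "C": "0C:C4:7A:C2:D9:48",
    "D": "0C:C4:7A:C2:D8:BD",
}

# Six colon-separated hex octets.
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def normalize_mac(mac: str) -> str:
    """Validate a MAC address and return it in lower-case colon form."""
    if not MAC_RE.match(mac):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return mac.lower()

normalized = {node: normalize_mac(mac) for node, mac in IPMI_MACS.items()}
```

Running the check before entering the values catches a mistyped or truncated label early, instead of during node discovery.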

Schematic of the rear network port layout (left to right): IPMI | Eth2 | 10GE port | 10GE port | Eth0

1.4.2. Installation and deployment

Environment preparation

1) Before deployment, desktop virtualization software needs to be installed on the maintenance computer; VMware Workstation/Fusion, VirtualBox, etc. are all suitable.

2) Extract Foundation_VM-3.0.ovf.tar of Foundation 3.0, import it through the desktop virtualization software, and power on the VM.

3) After the virtual machine starts, two items appear on the desktop: set_foundation_ip_address and Nutanix Foundation.

4) Click set_foundation_ip_address, set the IP, subnet mask, gateway, and DNS of the Foundation VM, and then select Save.

5) Click Nutanix Foundation to start the Nutanix installation and configuration process. The global configuration screen appears; fill it in according to the planned network configuration, including the IPMI, Hypervisor, and CVM information, and click Next when done.

Note:

All IPs, including Hypervisor, CVM, and IPMI, are set in the same network segment. Because Layer 2 access is provided through the IPMI MAC address during initialization, and changing the CVM and Hypervisor addresses later is not recommended, it is recommended to place the IPMI addresses in the business segment as well; the IPMI addresses can be changed after initialization.

Note the IPMI account: ADMIN (uppercase).

6) On the Block/Node configuration screen, Foundation automatically detects the Block/Node status. If detection fails, the information must be entered manually: click Add Node, as shown in the figure below.

7) Matching the actual hardware, there is only one Block, with 4 nodes in the Block; then click Create.

8) Enter the specific information for each node: the IPMI MAC address (printed below the IPMI port on the physical node), the IPMI IP address, the Hypervisor IP address, and the CVM IP address. If installing all four nodes at the same time, check the Position option in front; then click Next.

9) Select the Hypervisor to install (KVM here), then select the NOS Package and the Hypervisor ISO Image, and click Next.

Note: the NOS installation package nutanix_installer_package-danube--.tar must be uploaded to /home/nutanix/foundation/nos on the Foundation VM in advance.

10) Click Create Cluster and enter the Cluster Name, cluster external IP address, CVM DNS server, CVM NTP server, Hypervisor NTP server, and the maximum redundancy factor.

11) Select the Blocks and Nodes for the Cluster; select all here.

12) After checking the information, click Run Installation and wait for the installation to complete.
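The same-segment rule from the note above can be checked mechanically before running the installation. A minimal sketch using Python's standard ipaddress module; the addresses and the /24 mask are hypothetical stand-ins for the real plan:

```python
import ipaddress

# Hypothetical planned addresses for one node -- substitute the real plan.
PLANNED = {
    "IPMI":       "10.120.189.11",
    "Hypervisor": "10.120.189.21",
    "CVM":        "10.120.189.31",
}
SEGMENT = ipaddress.ip_network("10.120.189.0/24")

def all_in_segment(addresses, segment):
    """True when every planned address falls inside the given segment."""
    return all(ipaddress.ip_address(a) in segment for a in addresses)

print(all_in_segment(PLANNED.values(), SEGMENT))  # True
```

Running this over all four nodes' IPMI, Hypervisor, and CVM addresses confirms the plan before anything is burned into Foundation.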

Network configuration

After installation, VLANs need to be configured, on the Hypervisor and on the CVM respectively.

1) First log in to the Hypervisor host with an SSH command-line tool, for example:

ssh root@10.120.189.8

Set the VLAN on the Hypervisor:

ovs-vsctl set port br0 tag=205

Then run the following command to view the existing virtual ports:

ovs-vsctl list port br0

2) Log in to the CVM from the Hypervisor:

ssh nutanix@192.168.5.254

Then run the command to change the VLAN setting:

change_cvm_vlan 205

Note: the IP addresses above are examples; change them according to the plan in actual use.
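Putting the two steps together, the whole VLAN change is a short command sequence per host. A sketch only; the addresses and VLAN ID are the examples from the text, so substitute the planned values:

```
# 1) Tag the hypervisor's bridge and verify the result
ssh root@10.120.189.8
ovs-vsctl set port br0 tag=205
ovs-vsctl list port br0

# 2) From the hypervisor, log in to the CVM over the internal link and retag it
ssh nutanix@192.168.5.254
change_cvm_vlan 205
```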

1.4.3. Power-on and power-off steps

Shutdown

1) Notify the virtual machine owners to shut down all virtual machines.

2) SSH into any CVM and run the command cluster stop to stop the cluster.

3) The nodes then shut down in turn, which takes about 15-20 minutes.

4) After all four CVMs have shut down, press each node's power button to power off (the order does not matter; press and hold for 3 seconds).

Startup

1) Press the node power buttons in turn (generally A, B, C, D, though the actual order does not matter).

2) After 3-5 minutes, the nodes are powered on in turn.

3) SSH into any CVM and run the command to start the cluster: cluster start

4) After the cluster starts successfully, test connectivity to the cluster address and check the cluster state: cluster status

5) Log in to Prism and check the alarms; if everything is normal, power on the virtual machines.

Viewing cluster status

After logging in, enter: cluster status
