
A reproducible, ready-to-implement commercial ultra-high-availability scheme: Proxmox + HAProxy, etc.


Current situation

There are a large number of single points of failure: each store runs one physical server, and the central computer room runs several more. If a store server fails, that store's business is affected; if a central server fails, non-cash business at the stores (bank card swiping, WeChat Pay, Alipay, etc.) is affected.

General approach

Remove the servers from the individual stores, harden each store's network instead (multi-line access, 4G terminal equipment, etc.), and concentrate all servers in the central computer room to build a more highly available data platform.

Basic goal

High availability: minimal downtime; the failure of individual pieces of hardware does not affect normal business.

Scalability: as the business grows, capacity can be expanded without stopping services and without changing the existing system architecture.

Visual operation and maintenance: the running state of the system is always known and is displayed in a centralized, intuitive way.

Low cost: make full use of existing resources and plan reasonably, so that the cost of the whole platform stays under control while meeting actual needs.

Architecture composition

The architecture of this scheme is composed of load balancing, a hyper-converged private cloud, a monitoring platform and a backup system.

Load balancer

Responsible for forwarding requests from the store terminals to multiple identical back-end applications according to a chosen algorithm. A load balancer actually provides three functions: load balancing, health checking and failover (see the configuration sketch after this list).

Load balancing: multiple backends share the load, supporting larger-scale access and business requests.

Health check: when one or more backend services fail, the load balancer automatically removes the faulty systems from the forwarding pool; when a backend service returns to normal, it automatically rejoins the pool.

Failover: load balancers are deployed in pairs, generally one master and one backup. If the primary load balancer fails, the secondary one automatically takes over its work.
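A minimal sketch of such a backend pool, assuming HAProxy is the balancer in use (the addresses, port, backend name and health-check path below are illustrative assumptions):

    # Append a hypothetical frontend/backend pair to the HAProxy configuration
    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    frontend store_terminals
        bind 10.0.0.100:8080                  # assumed virtual IP, held by Keepalived
        default_backend pos_servers

    backend pos_servers
        balance roundrobin                    # share the load across identical backends
        option httpchk GET /health            # periodic health check (assumed URL)
        server app1 10.0.1.11:8080 check inter 3s fall 3 rise 2
        server app2 10.0.1.12:8080 check inter 3s fall 3 rise 2
    EOF
    systemctl reload haproxy                  # reload the configuration

A backend that fails its health checks is removed from the forwarding pool automatically and rejoins once it passes again, which is exactly the behaviour described above.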

Hyper-converged private cloud

Three or more well-configured physical servers form a decentralized cluster with distributed storage. As long as the minimum viable unit of the cluster survives, the cluster as a whole does not collapse; and if the virtual machines running on the physical nodes are configured for HA (high availability), they automatically drift to healthy physical nodes when their host fails.

Hyper-convergence has the following characteristics (see the HA sketch after this list):

Decentralization: there is no dedicated control node, so there is no control-node single point to worry about.

Distributed storage: traditional private-cloud cluster architectures achieve availability through shared storage, but the shared storage is itself a single point. Even with multi-disk redundancy and dual controllers, I/O remains centralized and performance cannot be improved. Distributing the storage across the nodes avoids this dilemma.

Lower construction cost: expensive centralized storage is no longer needed; the disks are spread across the local physical servers, so the investment cost drops significantly.

Online capacity expansion: without stopping services, physical machines can be upgraded (memory, CPU, etc.) and physical nodes can even be added.
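On Proxmox VE, which this scheme uses later on, the automatic drift described above is enabled per guest. A minimal sketch, assuming an already-running VM with ID 101 (the ID is a placeholder):

    # Put a guest under HA management so it is restarted or relocated on node failure
    ha-manager add vm:101 --state started --max_restart 1 --max_relocate 1
    # Check the HA state of all managed resources
    ha-manager status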

System monitoring

Real-time monitoring of host resources (both physical nodes and virtual machine nodes) and of applications or services, with timely and effective alarms in the event of a failure.

Data backup

Backup consists of two parts: backing up important virtual machines and backing up application data. Virtual machine backups exist for rapid fault recovery; application data backups exist for data integrity.
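A minimal virtual machine backup sketch using Proxmox's vzdump, assuming an NFS backup storage already registered under the name backup-nfs (the VM ID and storage name are placeholders):

    # Snapshot-mode, compressed backup of VM 101 onto the backup storage
    vzdump 101 --storage backup-nfs --mode snapshot --compress lzo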

With the above safeguards in place, the availability of the entire platform improves by orders of magnitude. Considering failures in several scenarios further illustrates its availability and reliability:

Virtual machine failure: the load balancer takes over; client access is not affected and business is not interrupted.

Physical machine failure: the hyper-convergence mechanism takes over; the applications running on it (including the virtual machines themselves) drift automatically, client access is not affected, and business is not interrupted.

Collapse of the whole cluster: the backup system takes over. Rebuild a new cluster, attach the backup data over the network, select the backup files in the web interface, click restore, and wait for the virtual machines to recover quickly (a restore sketch follows). The traditional recovery procedure, by contrast, looks like this: reinstall the system → deploy the application environment → copy the backup data onto the target system → import the data → verify data validity and integrity → restore service.
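A minimal restore sketch for that scenario using qmrestore on a rebuilt node, assuming the NFS backup share has been re-attached and mounted (the archive path and IDs are placeholders):

    # Restore the dumped image as VM 101 onto local storage
    qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-101-<timestamp>.vma.lzo 101 --storage local-lvm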

The monitoring system is a sleepless eye: as soon as a fault occurs it raises an alarm, so that technical staff can be notified and make repairs in time.

Infrastructure

Load balancer

A pair of independent servers; a high-end configuration is not required. Recommended configuration: single CPU, 32 GB memory, 300 GB 15,000 rpm SAS disk (the main data stored is access logs).

Hyper-converged private cloud

At least four physical servers, with the data network separated from the cluster network. A 10-gigabit network is recommended; where that is not possible, full gigabit must at least be guaranteed. The recommended configuration for a single node is as follows:

CPU: 2 CPUs, 10 cores each, multi-threaded.

Memory: at least 128 GB; DDR3 is acceptable, depending mainly on the motherboard.

Hard disks: a 250 GB solid-state system disk, plus 4 or more 10,000 rpm high-performance SAS data disks of 2.4 TB capacity (SATA disks have poor read/write performance and are not recommended).

Network card: if a 10-gigabit network is used, the network cards and optical fibre modules need to be purchased separately.

Data backup

Memory and CPU requirements are low; the disks are multiple low-speed, high-capacity SATA disks, with backup capacity greater than the sum of the other data. To reduce backup time and use storage space efficiently, it is not necessary to back up everything, only enough to guarantee that the whole system can be restored quickly after a catastrophic failure.

Monitoring system

A single physical machine with an ordinary configuration is sufficient. To ensure reliability, the whole system can be backed up automatically.

Main software

Load balancing

Keepalived + HAProxy
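A minimal Keepalived sketch for the master/backup pair, assuming the virtual IP 10.0.0.100 on interface eth0 used in the earlier HAProxy example (all values are illustrative):

    # Master-side VRRP instance; use "state BACKUP" and a lower priority on the peer
    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass s3cret
        }
        virtual_ipaddress {
            10.0.0.100
        }
    }
    EOF
    systemctl restart keepalived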

Hyper-converged private cloud

System: Debian

Management platform: Proxmox VE 5.3

Storage: Ceph (see the sketch below)
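A minimal sketch of creating the Ceph layer from the Proxmox VE nodes, using the command names of the 5.x series (newer releases rename them to "pveceph mon create" and so on); the device and pool names are placeholders:

    # On each node: install the Ceph packages and create a monitor
    pveceph install
    pveceph createmon
    # Turn a raw local disk into an OSD (hypothetical device)
    pveceph createosd /dev/sdb
    # Create a replicated pool for VM disks (hypothetical pool name)
    pveceph createpool vmpool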

Monitoring system

System: CentOS 7

Management platform: Centreon 18

Backup system

System: CentOS 7 or FreeBSD

Sharing: NFS (see the sketch below)
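A minimal sketch of publishing the backup server's space over NFS and registering it with the Proxmox cluster as a backup storage (the directory, subnet, address and storage name are placeholders):

    # On the backup server: export the backup directory to the cluster network
    echo '/backup 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra
    # On a Proxmox node: register the share for backup content
    pvesm add nfs backup-nfs --server 192.168.10.5 --export /backup --content backup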

Implementation steps

1. Deploy the hyper-converged private cloud

Initialize the cluster and create the Ceph storage (monitors, OSDs, pools)

Attach the shared storage and upload the operating system ISOs

Create the virtual machines

Install the virtual machine operating systems

Configure the virtual machines for high availability (HA)

A virtual machine can be made into a template, cloned and migrated manually, and must drift automatically to another node if its physical host fails (see the sketch below).
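A minimal sketch of the cluster, template and migration commands on Proxmox VE, assuming a first node at 192.168.10.11 and a prepared guest with ID 9000 (all names, addresses and IDs are placeholders):

    # On the first node: create the cluster
    pvecm create store-cloud
    # On every additional node: join the cluster via the first node's address
    pvecm add 192.168.10.11
    # Verify membership and quorum
    pvecm status
    # Turn a prepared guest into a template; migrate a running VM to another node by hand
    qm template 9000
    qm migrate 101 node2 --online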

2. Deploy the application

Install the required applications on a virtual machine, verify that they work correctly and make it into a template

Clone virtual machines from the template and change each clone's network address after startup to keep it unique (a cloning sketch follows this step)

Import the data

Test that the services behave correctly

Application deployment is completed by Party A, with Party B cooperating.
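A minimal cloning sketch, assuming the template created in step 1 has ID 9000 (the IDs and name are placeholders):

    # Full clone of template 9000 into a new VM 201, then start it
    qm clone 9000 201 --name pos-app01 --full
    qm start 201
    # Log in to the clone and change its IP address so it stays unique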

3. Load balancing

Install the system

Install the software

Configure the functions

Test the functions

4. Monitoring system

Install the system

Configure the monitoring items

Simulate failures and their recovery.

5. Data backup

Prepare the space and assign the appropriate permissions

Set the automatic backup schedule (see the sketch at the end of this step)

Temporarily set a near point in time and select a few virtual machines to back up

Check whether the automatic backup works

Manually delete the backed-up virtual machines and restore them from the backups just taken, to verify reliability and correctness.
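On Proxmox VE 5.x, backup jobs configured in the web interface are stored in /etc/pve/vzdump.cron; a scheduled entry looks roughly like the sketch below (the time, VM IDs and storage name are placeholders):

    # /etc/pve/vzdump.cron (fragment): weekly snapshot backup at 02:00 on Saturdays
    0 2 * * 6           root vzdump 101 201 --quiet 1 --mode snapshot --storage backup-nfs --compress lzo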

Project delivery

All individual functions work: load balancing, monitoring system, backup, failover, etc.

The overall function is normal: the terminal systems can carry out all kinds of business normally, such as cashiering, order processing and goods entry.

Technical training: explanation of each module's functions, risk notification (which functions are best not experimented with lightly), virtual machine management, data backup, adding and removing monitoring items.

Free maintenance period: three months from the date of delivery

Hardware to purchase

Serial number | Name | Configuration | Procurement | Quantity | Price | Total price | Warranty period
1 | Load balancer pair | Existing equipment, 32 GB memory, 600 GB hard disk | Not needed | 0 | | |
2 | Backup server | Existing equipment, 32 GB memory, 8-12 SATA disks | Not needed | 0 | | |
3 | Server CPU replacement | Replace with 10-core, 20-thread 2470 V2 CPUs | Needed | 10 | | |
4 | Server memory upgrade | Increase each server's memory to 160 GB | Needed | 20 | | |
5 | Server system disk | A separate system disk per server, one each for 5 servers | Needed | 5 | | |
6 | 10-gigabit network card | 10 Gb/s network card plus 10-gigabit fibre jumper | Needed | 5 | | |
7 | 10-gigabit module | 10 Gb/s fibre module | Needed | 10 | | |
8 | 10,000 rpm SAS hard disk | Replace the servers' data disks with 10,000 rpm SAS disks to improve performance, 4 per server for 5 servers | Needed | 20 | | |
9 | 24-port 10-gigabit switch | 24 ports, all 10-gigabit; supports 4K VLANs, Guest VLAN, Voice VLAN, GVRP, MUX VLAN, and VLANs based on MAC / protocol / IP subnet / policy / port; 1:1 and N:1 VLAN mapping; MAC features: automatic MAC address learning and aging, static / dynamic / black-hole MAC entries, source-MAC filtering; IP routing: static routes, RIPv1/2, RIPng, OSPF, OSPFv3, ECMP, IS-IS, IS-ISv6, BGP, BGP4+, VRRP, VRRP6 | Choose either the 16-port or the 24-port 10-gigabit switch according to the actual situation | 1 | | |
10 | 16-port 10-gigabit switch | 16 ports, all 10-gigabit; otherwise the same VLAN, MAC and IP routing features as above | | 1 | | |
Total (VAT included) | | | | | | |

Project implementation service

The project must be implemented on site; the costs involved include travel, accommodation and meals. Because the hyper-converged platform, load-balancing platform and monitoring platform all use open-source software, there are no licensing fees; for the commercial applications running on the platform, such as ERP, licensing and fees are handled by, and are the responsibility of, the project owner.

Name | Amount
Travel | Paid by the project party; we do not advance it
Implementation cost | Yuan, excluding tax
Authorization | None; if a fee is required, the project party handles it itself

The project implementation cycle is expected to be two weeks.

The acceptance criteria are as follows:

Hyper-converged system

Ability to create virtual machines and install operating systems

Ability to migrate virtual machines

Ability to create templates

Ability to create virtual machines from templates

Ability to clone the created virtual machine

Ability to destroy virtual machines

Ability to back up virtual machines

Ability to restore virtual machines from backup

Power off a running physical server and verify that the virtual machines configured for HA drift automatically.

Load balancing

Health check: shut down a virtual machine or an application without affecting the service

Failover (1): shut down the main load balancer; the VIP drifts automatically and the forwarding service is not affected.

Failover (2): restore the main load balancer; the VIP returns to it and the forwarding service continues (a test sketch follows).
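A minimal sketch of exercising these failover checks from the command line, assuming the virtual IP 10.0.0.100 and port 8080 from the earlier examples (all values are placeholders):

    # On the master: confirm it holds the VIP, then simulate a failure
    ip addr show | grep 10.0.0.100
    systemctl stop keepalived
    # On the backup: the VIP should now be present and requests should still succeed
    ip addr show | grep 10.0.0.100
    curl -s http://10.0.0.100:8080/health
    # Restore the master; with the priorities above, the VIP moves back automatically
    systemctl start keepalived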

Monitoring system

Can add monitoring items normally

Able to check the configuration syntax

Simulate a host failure; the monitoring system raises an alarm in real time

Simulate a service failure; the monitoring system raises an alarm in real time.
