2025-04-02 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
When designing a data center, the most common approach has long been the three-tier design: the classic access, aggregation, and core layers, often referred to as a three-tier topology. Data center design is now evolving away from this model toward a more modern two-tier spine-leaf architecture.
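The two-tier structure can be sketched numerically: in a spine-leaf (Clos) fabric every leaf switch has one uplink to every spine, so any two servers are at most two switch hops apart. The function names and example counts below are illustrative, not from the original text.

```python
# Illustrative sketch of two-tier spine-leaf connectivity: every leaf
# switch connects to every spine switch (a full mesh between the tiers).

def spine_leaf_links(num_spines: int, num_leaves: int) -> int:
    """Total leaf-to-spine links in a full-mesh spine-leaf fabric."""
    return num_spines * num_leaves

def max_server_hops() -> int:
    """Worst-case switch hops between two servers: leaf -> spine -> leaf."""
    return 2

# Example: 4 spines and 16 leaves give 64 inter-switch links.
print(spine_leaf_links(4, 16))  # -> 64
```

Adding a spine increases cross-sectional bandwidth; adding a leaf increases port capacity, which is why the design scales out rather than up.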
With FCoE, Fibre Channel storage is carried over Ethernet and can coexist with other Ethernet and IP traffic. FCoE is not mandatory, however; many virtualized data center deployments use IP-based storage instead.
An important differentiator between virtualized data centers is the network fabric itself.
When building a data center fabric, be sure to account for the number of virtual machines and applications that will run on each host in the future; this information guides the choice of oversubscription ratios.
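How the VM count feeds into the oversubscription decision can be shown with a small sizing helper. This is a hypothetical sketch; the per-VM bandwidth figure is an assumption you would have to supply from your own measurements.

```python
# Hypothetical sizing helper: estimate the offered load per host from the
# expected VM count, then compare it against the host's uplink capacity.
# The per-VM bandwidth figure is an assumption, not a measured value.

def host_offered_load_gbps(vms_per_host: int, gbps_per_vm: float) -> float:
    """Aggregate bandwidth demand of all VMs on one host."""
    return vms_per_host * gbps_per_vm

def host_oversubscription(vms_per_host: int, gbps_per_vm: float,
                          uplink_gbps: float) -> float:
    """Ratio of offered load to uplink capacity; > 1.0 means oversubscribed."""
    return host_offered_load_gbps(vms_per_host, gbps_per_vm) / uplink_gbps

# Example: 40 VMs at 0.5 Gbps each on a 10 Gbps uplink -> 2.0 (a 2:1 ratio).
print(host_oversubscription(40, 0.5, 10))  # -> 2.0
```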
A virtualized data center design uses a variety of QoS features to give different priorities to the traffic classes that share the same uplinks to the ToR switch. Applications running in virtualized data centers typically follow the so-called three-tier application model: a combination of application servers, databases, and web servers, with each tier usually running on dedicated virtual machines. In enterprise deployments, databases are often still hosted on bare-metal servers.
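The idea of prioritizing traffic classes on shared uplinks can be sketched as a simple classification table. This is not a vendor configuration; the traffic names and DSCP values below are assumptions chosen for illustration.

```python
# Illustrative classification table (not a vendor configuration): traffic
# types sharing the same uplinks are mapped to DSCP markings so switches
# can queue them with different priorities. Values here are assumptions.

DSCP_BY_TRAFFIC = {
    "storage":  46,  # EF: highest priority in this sketch
    "database": 26,  # AF31
    "app":      18,  # AF21
    "web":      10,  # AF11
    "bulk":      0,  # best effort
}

def classify(traffic_type: str) -> int:
    """Return the DSCP value for a traffic type, defaulting to best effort."""
    return DSCP_BY_TRAFFIC.get(traffic_type, 0)

print(classify("database"))  # -> 26
print(classify("unknown"))   # -> 0
```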
By running multiple virtual machines and applications on the same physical server, hardware resources can be used more efficiently.
Storage is treated as a pool of resources without physical boundaries.
Storage virtualization applies to large storage area network (SAN) arrays, to logical partitioning of local workstation drives, and to redundant arrays of independent disks (RAID).
Load balancing lets users reach multiple web servers and applications as a single instance, rather than addressing each server individually.
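The "single instance in front of many servers" idea can be sketched minimally. This is a round-robin toy, not a production balancer (real ones add health checks, session persistence, and weighting); the server names are made up.

```python
import itertools

# Minimal sketch of load balancing: clients address one virtual instance,
# and the balancer fans requests out across the real servers behind it.
# Round-robin only; real balancers also do health checks, weighting, etc.

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self) -> str:
        """Return the next backend server to receive a request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.pick() for _ in range(4)])  # -> ['web-1', 'web-2', 'web-3', 'web-1']
```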
Orchestration refers to the coordinated configuration of virtualized resource pools and virtual instances.
Big data's defining attributes: volume, velocity, variety, and complexity.
Big data usually does not contain unique identifiers.
Big data components need to be integrated with the enterprise's existing business model.
A network that cannot absorb bursty traffic will drop packets, so devices need well-tuned buffers to withstand bursts.
When selecting switches and routers, ensure the architecture uses buffering and queuing policies that handle traffic bursts effectively.
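Why buffer depth matters under bursts can be shown with a toy queue model. The packet counts and drain rate below are arbitrary assumptions; the point is only that anything arriving beyond the buffer while the link drains slowly is lost.

```python
from collections import deque

# Toy model of burst absorption: packets arrive in per-tick bursts faster
# than the link can drain them; anything beyond the buffer is dropped.

def simulate_burst(burst_sizes, buffer_pkts, drain_per_tick):
    """Return total drops for a sequence of per-tick packet bursts."""
    queue = deque()
    drops = 0
    for burst in burst_sizes:
        for _ in range(burst):          # packets arrive
            if len(queue) < buffer_pkts:
                queue.append(1)
            else:
                drops += 1              # buffer full: tail drop
        for _ in range(min(drain_per_tick, len(queue))):
            queue.popleft()             # link drains a few packets per tick
    return drops

# A 10-packet burst into a 4-packet buffer draining 2 packets/tick drops 6.
print(simulate_burst([10, 0, 0], buffer_pkts=4, drain_per_tick=2))  # -> 6
```

A deeper buffer or a faster drain rate drives the drop count to zero, which is exactly the trade-off the switch-selection advice is about.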
Good network design must account for the possibility of unacceptable congestion at key points in the network under realistic load.
Eliminating oversubscription entirely is costly. Generally acceptable oversubscription ratios are about 4:1 at the server access layer and 2:1 between the access layer and the aggregation or core layer.
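Checking a switch tier against these guidelines is simple arithmetic: the oversubscription ratio is total downlink bandwidth divided by total uplink bandwidth. The port counts in the example are assumptions; the 4:1 threshold is the access-layer figure from the text above.

```python
# Oversubscription ratio of a switch tier:
#   total downlink bandwidth / total uplink bandwidth.
# The 4:1 access-layer and 2:1 aggregation/core guidelines come from the
# text; the port counts below are assumed for illustration.

def oversub_ratio(downlinks: int, downlink_gbps: float,
                  uplinks: int, uplink_gbps: float) -> float:
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# 48 x 10G server ports with 4 x 40G uplinks -> 3:1, within the 4:1 guideline.
ratio = oversub_ratio(48, 10, 4, 40)
print(ratio, ratio <= 4.0)  # -> 3.0 True
```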
The recommended configuration of a cluster depends on workload characteristics.
Any latency optimization must start with network-level analysis; "architecture first, devices later" is an effective strategy. Application-level latency, caused mainly by application logic, has a far greater impact on workloads than network-level latency.
In a data center, network traffic usually follows an east-west pattern. Large-scale deployment can be achieved through a POD (point of delivery) model.
Big data runs over IP, while HPC traffic typically runs over Ethernet but not necessarily IP.
HPC storage usually does not require Fibre Channel, and there is no dedicated storage network constraining the addressing on the switches.
A typical HPC network design uses a 2:1 oversubscription ratio; for a lower-cost design the ratio can be increased, usually to no more than 5:1.
High-frequency trading (HFT) environments are the most representative ultra-low-latency (ULL) data centers.
Source replication provides the fastest way to copy market data to the different target servers (called source processors) that process it (Nexus 3548: 50 nanoseconds).
The Massively Scalable Data Center (MSDC) system is a reference architecture based on a Clos fabric and built on Cisco platforms.
EoR (End of Row) is a classic data center model in which switches are placed at one end of a row of cabinets; each rack runs cables to the network equipment at the end of the row.
MoR (Middle of Row) is a variant of EoR with the network equipment located in the middle of the row, which reduces the required cable lengths.
ToR (Top of Rack) is currently the most common model: it connects servers to switches located in the same rack.
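The cabling difference between the models can be made concrete with a rough count: under EoR every server needs a long run to the end of the row, while under ToR only the ToR switch uplinks leave the rack. The rack and uplink counts below are assumed for illustration.

```python
# Rough cabling comparison (assumed geometry): EoR needs one long cable
# run per server to the end-of-row switch, while ToR keeps server cabling
# inside the rack and only the ToR uplinks leave it.

def eor_long_runs(racks: int, servers_per_rack: int) -> int:
    """EoR: one long run per server to the end-of-row switch."""
    return racks * servers_per_rack

def tor_long_runs(racks: int, uplinks_per_tor: int) -> int:
    """ToR: only the ToR switch uplinks leave each rack."""
    return racks * uplinks_per_tor

# 10 racks of 40 servers: 400 long runs with EoR vs 40 with ToR (4 uplinks).
print(eor_long_runs(10, 40), tor_long_runs(10, 4))  # -> 400 40
```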
When designing a network, assume that some components will fail, rather than trying to make every network component flawless.
© 2024 shulou.com SLNews company. All rights reserved.