
What is the seven-layer OSI network model

2025-01-18 Update From: SLTechnology News&Howtos


Many people are not familiar with the seven-layer OSI network model, so this article summarizes it with detailed content and clear steps; it should have some reference value. I hope you gain something after reading it.

The seven-layer OSI network model

Before talking about networking, the seven-layer OSI network model must be mentioned.

OSI is the abbreviation of Open Systems Interconnection.

The diagram usually shown here (omitted) is the familiar seven-layer OSI network model together with the corresponding TCP/IP model.

The functions of the application layer are file transfer, e-mail, and file services. The main protocols used are HTTP, SMTP, and FTP.

The functions of the presentation layer are data formatting, transcoding and data encryption.

The function of the session layer is to establish, manage, and release sessions with other nodes.

The function of the transport layer is to provide an end-to-end interface, and the protocols used are mainly TCP and UDP.

The function of the network layer is to route packets and the protocol used is IP.

The function of the data link layer is to transmit addressed frames and check for data errors.

The function of the physical layer is to transmit data on the physical medium as binary data.

Delay and bandwidth

Recently a telecom salesperson keeps calling me, offering to upgrade my home broadband from 100M to 500M for just one yuan a day. One yuan is a small sum, but it is still hard-earned money. So, should I take the deal? How much would upgrading to 500M actually improve performance and latency?

2020 can be called the first year of 5G in China. Setting aside Huawei's and Citic's capability to develop 5G base stations and protocols, the intuitive impression is that there are more 5G phones, and the carrier's business hall keeps urging you to upgrade to a 5G plan. So, should you?

Before answering these two questions, let's define two terms:

Delay: the time it takes for a packet to be sent from the information source to the destination.

Bandwidth: the maximum throughput of a logical or physical communication path.
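To make these two terms concrete, here is a toy Python calculation (the page size, bandwidths, and round-trip time are assumptions for illustration, not measurements). It hints at the answer to the 500M question: for a small fetch, a fivefold bandwidth upgrade barely moves the total.

```python
# Toy model: fetching one resource costs roughly one round trip plus
# the time to push its bits through the pipe.

def transfer_time_ms(size_bytes, bandwidth_mbps, rtt_ms):
    """One RTT plus serialization time for the payload."""
    serialization_ms = size_bytes * 8 / (bandwidth_mbps * 1e6) * 1000
    return rtt_ms + serialization_ms

# A 100 KB page over a 40 ms round trip:
print(f"100 Mbps: {transfer_time_ms(100_000, 100, 40):.1f} ms")  # 48.0 ms
print(f"500 Mbps: {transfer_time_ms(100_000, 500, 40):.1f} ms")  # 41.6 ms
```

Five times the bandwidth shaves 48 ms to 41.6 ms here, because the round trip, not the pipe, dominates.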

If you visit a website, such as www.flydean.com, let's take a look at how data gets from the server to your computer.

First, data is transferred from the server to ISP via Ethernet (Ethernet is a computer LAN technology).

What is an ISP? ISP stands for Internet Service Provider. Only through an ISP can a server be connected to the Internet.

The Internet is formed by interconnecting Internet service providers (ISPs) through backbone networks.

So an ISP is, in essence, a big intermediary.

All right: the data reaches the ISP that serves my home, travels over optical fiber or cable to my Wi-Fi router, and is finally picked up by my computer over the Wi-Fi wireless signal.

01

Composition of delay

Having analyzed the route the data travels, let's look at what the delay consists of.

First there is the distance the signal must travel: the longer the distance, the longer the propagation takes.

Then there is the length of the message: it takes time to push all the bits of a message onto the link, so the longer the message, the more time this takes.

Once the data is on the link, it also takes time to process packet headers, check for bit errors, and determine each packet's destination.

Finally, packets may have to wait in queues, which adds queuing delay.

Nowadays the main transmission medium of the network is optical fiber. Light in fiber does not travel in a straight line and is slowed by the medium's refractive index, so it propagates somewhat slower than light in a vacuum.

For example, even a signal traveling once around the equator takes only about 200ms. That is already fast.
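The four components above can be put into one toy formula; the fiber speed (roughly 200,000 km/s), processing, and queuing figures below are rough assumptions for illustration:

```python
def total_delay_ms(distance_km, size_bytes, bandwidth_mbps,
                   processing_ms=0.1, queuing_ms=0.0):
    """Sum the four delay components: propagation, transmission,
    processing, and queuing (all figures are illustrative)."""
    propagation_ms = distance_km / 200_000 * 1000        # ~200,000 km/s in fiber
    transmission_ms = size_bytes * 8 / (bandwidth_mbps * 1e6) * 1000
    return propagation_ms + transmission_ms + processing_ms + queuing_ms

# One 1500-byte frame once around the equator (~40,075 km) at 100 Mbps:
print(f"{total_delay_ms(40_075, 1500, 100):.1f} ms")   # dominated by propagation
```

Propagation alone contributes about 200 ms here; the other three terms together add well under a millisecond.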

It is true that 200ms is fast, but for some application scenarios with high real-time requirements, we can use CDN technology (Content Delivery Network, content distribution network) to deploy content in the global network and then fetch data from the nearest place. Thus the transmission delay is greatly reduced.

200ms is fast enough, but why do we still feel slow?

We have all heard of the bucket principle: how much water a bucket can hold depends on its shortest plank. Network latency is similar: overall speed does not depend on how advanced the backbone technology is, because improvements there shave off only a tiny sliver of the total.

What really determines your network speed is the last kilometer: the transmission rate of your cable, the rate of your Wi-Fi, and the processing speed of your computer.

Higher bandwidth is certainly good, especially for moving large blocks of data, such as streaming music or video or downloading large files. But everyday web browsing fetches many small resources from dozens of hosts, and there the round-trip time becomes the bottleneck.

IP protocol

IP, or Internet Protocol (Internet Protocol), is responsible for routing and addressing between networked hosts.

The basic unit transmitted by the various physical networks at the link layer is the frame (MAC frame). The frame format varies with the physical network, and so does the physical address (MAC address). The job of the IP protocol is to present a uniform IP packet to the transport layer (TCP layer): it converts the different kinds of MAC frames into uniform IP packets and replaces the MAC frame's physical address with a network-wide logical address (the IP address).

01

IP packet

What is a packet (data packet)?

A packet is the unit of packet switching: the data to be transmitted is divided into segments, and each segment is sent as a packet.

Each packet has a header and a body; the header contains the destination address and other required fields, so every packet can reach its destination on its own, without all packets taking the same path. At the destination, the packets are reassembled to restore the originally sent data.

Let's take a look at the composition of IP packets.

Note that the Total Length field in the IP header occupies 2 bytes, so the maximum length of an IP packet is 2^16 - 1 = 65535 bytes.
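To see where that 2-byte field sits, here is a hand-packed minimal IPv4 header in Python; every field value, including the addresses, is made up purely for illustration:

```python
import struct

# Minimal IPv4 header (20 bytes, no options); all values illustrative.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,              # version 4, IHL 5 (5 * 4 = 20 bytes)
    0,                         # DSCP/ECN
    40,                        # Total Length: header + payload
    0x1C46,                    # Identification
    0,                         # flags + fragment offset
    64,                        # TTL
    6,                         # protocol (6 = TCP)
    0,                         # checksum (left zero in this sketch)
    bytes([192, 168, 0, 1]),   # source address
    bytes([10, 0, 0, 1]),      # destination address
)

total_length = struct.unpack("!H", header[2:4])[0]
print(total_length)            # the 2-byte Total Length field -> 40
print(2 ** 16 - 1)             # its maximum possible value -> 65535
```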

02

Fragmentation and recombination

The link layer has a maximum transmission unit (MTU) that limits the maximum length of a data frame, and the limit differs by network type. If the IP layer has a packet to send whose length exceeds the MTU, it fragments the packet so that each fragment is no longer than the MTU.

Fragmented IP packets are reassembled only when they arrive at the destination. Reassembly is done by the destination's IP layer, so that fragmentation and reassembly are transparent to the transport layer (TCP and UDP).
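A sketch of how the fragment sizes work out, assuming a 20-byte IP header and ignoring the per-fragment header copying and offset bookkeeping that real fragmentation performs:

```python
def fragment_lengths(payload_len, mtu, ip_header=20):
    """Data bytes carried by each fragment. Every fragment except the
    last must carry a multiple of 8 data bytes, because the fragment
    offset field counts in 8-byte units."""
    max_data = (mtu - ip_header) // 8 * 8
    sizes = []
    while payload_len > 0:
        chunk = min(max_data, payload_len)
        sizes.append(chunk)
        payload_len -= chunk
    return sizes

# A 4000-byte payload crossing an Ethernet link (MTU 1500):
print(fragment_lengths(4000, 1500))   # [1480, 1480, 1040]
```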

03

MSS and MTU

MSS, short for Maximum Segment Size, is a concept in the TCP protocol.

MSS is the largest data segment that TCP can transmit at one time. To get the best transmission performance, TCP usually negotiates the MSS of both parties when the connection is established. In implementations it is typically derived from the MTU (subtracting the 20-byte IP header and the 20-byte TCP header), and the connection uses the smaller of the two MSS values the parties announce.

In general, the Ethernet MTU is 1500 bytes, so on Ethernet the TCP MSS is usually 1460 bytes.
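That arithmetic, plus the take-the-smaller-offer negotiation, fits in a few lines (a simplified sketch, not the real TCP option exchange):

```python
def negotiated_mss(local_mtu, peer_mss, ip_hdr=20, tcp_hdr=20):
    """Our offer is MTU minus the IP and TCP headers; the connection
    uses the smaller of our offer and the peer's announced MSS."""
    return min(local_mtu - ip_hdr - tcp_hdr, peer_mss)

print(negotiated_mss(1500, 1460))   # Ethernet: 1500 - 20 - 20 = 1460
print(negotiated_mss(1500, 1200))   # a smaller peer offer wins -> 1200
```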

TCP

TCP, or Transmission Control Protocol (Transmission Control Protocol), is responsible for providing a reliable abstraction layer over unreliable transmission channels, hiding most of the complex details of network communication from the application layer, such as packet loss retransmission, sequential transmission, congestion control and avoidance, data integrity, and so on.

01

TCP three-way handshake

In general, with the TCP protocol, three exchanges are needed for a client and server to agree to establish a connection.

SYN

The client selects a random sequence number x and sends a SYN packet, which may also include other TCP flags and options.

SYN ACK

The server acknowledges x + 1, picks its own random sequence number y, appends its own flags and options, and returns a SYN-ACK.

ACK

The client acknowledges y + 1 and sends the final ACK packet of the handshake; its own sequence number is now x + 1.
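The three steps can be sketched as a toy model of the sequence and acknowledgement numbers (real TCP arithmetic is modulo 2^32; this sketch ignores wraparound):

```python
import random

def three_way_handshake():
    """Toy model of the numbers exchanged in a TCP handshake."""
    x = random.randrange(2**32)                       # client's random ISN
    syn = {"seq": x}                                  # 1. client -> server: SYN
    y = random.randrange(2**32)                       # server's random ISN
    syn_ack = {"seq": y, "ack": syn["seq"] + 1}       # 2. server -> client: SYN-ACK
    ack = {"seq": x + 1, "ack": syn_ack["seq"] + 1}   # 3. client -> server: ACK
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
print(syn_ack["ack"] == syn["seq"] + 1)   # server acknowledged x + 1
```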

02

Congestion collapse

Suppose several IP packets arrive at a router at the same time, all expecting to be forwarded through the same output port.

Obviously they cannot all be processed at once; they must be served in some order, and the buffer on the intermediate node gives packets waiting for service some protection.

However, if this situation persists, the router can only discard packets once its buffer space is exhausted.

In this state of continuous overload, the network performance will decline sharply.

03

Flow control

Flow control is a mechanism that prevents the sender from sending more data than the receiver can handle. Otherwise the receiver may not keep up because it is busy, under heavy load, or its buffer is full.

To achieve flow control, each side of a TCP connection advertises its own receive window (rwnd), which tells the other side how much buffer space it has available to hold incoming data.

The original TCP specification allotted a 16-bit field for the advertised window size, which caps both the send and receive windows at 65535 bytes, and within that limit optimal performance often cannot be reached.

To solve this problem, RFC 1323 defines the TCP Window Scaling option, which can raise the receive window size from 65535 bytes to 1 GB.

So now the problem is that rwnd only describes the receiver's window; it says nothing about how much traffic the network path in between can carry. How can the sender avoid flooding the path?

To solve this problem, TCP introduces the concept of slow start.

Once sender and receiver have completed the TCP three-way handshake, they can start sending packets.

Here a new concept is introduced: the congestion window (cwnd).

cwnd is the sender's current limit on how much unacknowledged data may be in flight.

The first cwnd after the connection is established is an initial value: originally 1 segment, raised to 4 segments by RFC 2581 in 1999, and expanded to 10 segments by RFC 6928 in 2013.

Taking 10 segments as an example, let's look at how cwnd grows:

Generally speaking, cwnd grows multiplicatively: each fully acknowledged round doubles it, from 10 to 20, then 40, and so on, until losses appear or a threshold is reached.
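The doubling can be sketched as follows; stopping at the `ssthresh` cutoff is a simplification, since real TCP switches to additive congestion avoidance at that point rather than stopping:

```python
def slow_start(initial_cwnd=10, ssthresh=320):
    """cwnd (in segments) doubles once per round trip until it reaches
    ssthresh. Toy model of TCP slow start."""
    cwnd, history = initial_cwnd, [initial_cwnd]
    while cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)
        history.append(cwnd)
    return history

print(slow_start())   # [10, 20, 40, 80, 160, 320]
```

Note that reaching a large cwnd takes several round trips, which is exactly why short transfers never get to use the full bandwidth.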

So back to the conclusion that we talked about earlier, bandwidth is actually not that important.

Why? Consider that in HTTP/1.1 the client must wait for the server's response before sending the next request. If the requested file is small, the request finishes before cwnd grows large, so most of the time is spent on round trips rather than limited by bandwidth.

Of course, under HTTP/2, with its long-lived connection, slow start may matter less (this is not certain; differing opinions are welcome).

UDP

UDP stands for User Datagram Protocol.

The highlight of UDP is not the features it adds but the ones it omits: no guarantee of message delivery, no guarantee of delivery order, no connection-state tracking, and no congestion control.

Let's first take a look at the layout of a UDP packet:
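The whole UDP header is just 8 bytes: source port, destination port, length (header plus payload), and checksum. Since it is so small, it can be packed by hand; the ports and payload below are made-up values:

```python
import struct

payload = b"hello, UDP!!"                      # 12 illustrative payload bytes
udp_header = struct.pack(
    "!HHHH",
    53000,              # source port (arbitrary ephemeral port)
    53,                 # destination port
    8 + len(payload),   # length: 8-byte header + payload
    0,                  # checksum (left zero in this sketch)
)

src, dst, length, checksum = struct.unpack("!HHHH", udp_header)
print(src, dst, length)   # 53000 53 20
```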

NAT

Everyone knows that IPv4 addresses are limited and will soon run out. How can this problem be solved?

Of course, the permanent solution is IPv6, but IPv6 has been around for many years and still does not seem to have truly caught on.

Is there a solution that does not require IPv6?

That solution is NAT (Network Address Translation).

The principle of NAT is to map LAN IP addresses and ports to the IP address and ports of the NAT device.

NAT maintains a translation table internally, which allows a large number of LAN hosts to share one NAT IP address, distinguished by port.
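A toy translation table might look like this (the external IP and port range are arbitrary assumptions; real NATs also track the protocol and expire entries):

```python
import itertools

class Nat:
    """Toy NAT: maps (internal ip, port) to a fresh external port."""

    def __init__(self, external_ip):
        self.external_ip = external_ip
        self.table = {}                       # (int_ip, int_port) -> ext_port
        self.ports = itertools.count(40000)   # assumed external port range

    def translate(self, internal_ip, internal_port):
        key = (internal_ip, internal_port)
        if key not in self.table:             # first packet creates the entry
            self.table[key] = next(self.ports)
        return self.external_ip, self.table[key]

nat = Nat("203.0.113.7")
print(nat.translate("192.168.1.2", 5000))   # two internal hosts share
print(nat.translate("192.168.1.3", 5000))   # one IP via distinct ports
```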

So what's wrong with NAT?

The problem with NAT is that an internal client does not know its own external IP address, only its internal one.

With UDP, because the protocol is stateless, the NAT must rewrite the source IP address and source port of every UDP packet.

If the client tells the server its own IP address inside the application payload and asks the server to connect to it, the connection will certainly fail, because that address is not the client's reachable public IP.

Even if the public IP is found, any packet arriving at the NAT device's external IP must carry a destination port for which the NAT translation table holds an entry mapping it to an internal host's IP address and port number; otherwise the connection fails.

How to solve it?

The first way is to use a STUN server.

A STUN server is a server with a known public IP address. Before communicating, a client asks the STUN server for its own external IP and port, and then uses that external IP and port to communicate.

But sometimes UDP packets are blocked by firewalls or other middleboxes. In that case, the relay technique Traversal Using Relays around NAT (TURN) can be used.

Both parties send their data to a relay server, and the relay server forwards it. Note that this is no longer P2P.

Finally, we have an aggregator protocol called ICE (Interactive Connectivity Establishment):

It is in fact a combination of direct connection, STUN, and TURN: connect directly when possible, fall back to STUN when a direct connection fails, and fall back to TURN when STUN also fails.
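That fallback order can be sketched as plain candidate iteration (the three callables stand in for real connectivity checks, which ICE actually runs concurrently rather than strictly in sequence):

```python
def ice_connect(try_direct, try_stun, try_turn):
    """Try candidates in preference order: direct, then STUN-discovered
    address, then TURN relay. Returns the first method that works."""
    for name, attempt in [("direct", try_direct),
                          ("stun", try_stun),
                          ("turn", try_turn)]:
        if attempt():
            return name
    return None

# Direct connection blocked, STUN succeeds:
print(ice_connect(lambda: False, lambda: True, lambda: True))   # stun
```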

The above is the content of this article on the seven-layer OSI network model. I believe you now have a basic understanding of it, and I hope what was shared here is helpful to you.
