
What are the TCP protocol interview questions?


This article introduces the essentials of common TCP protocol interview questions. Many people run into these exact dilemmas in practice, so let's work through how to handle them. I hope you read carefully and come away with something!

001: Can you tell me the difference between TCP and UDP?

First of all, summarize the basic differences:

TCP is a connection-oriented, reliable, byte stream-based transport layer protocol.

UDP is a connectionless transport layer protocol. That's really it; the other features of TCP are simply absent.

Specifically, compared with UDP, TCP has three core features:

Connection oriented. The so-called connection refers to the connection between the client and the server. Before the two parties communicate with each other, TCP needs a three-way handshake to establish a connection, while UDP has no corresponding process of establishing a connection.

Reliability. TCP goes to great lengths to guarantee the reliability of the connection. In what ways does this reliability show itself? In two: TCP is stateful, and it is controllable.

TCP records exactly which data has been sent, which has been received, and which has not, and it guarantees that packets arrive in order and without error. That is what stateful means.

When it detects packet loss or a poor network environment, TCP adjusts its behavior accordingly, throttling its sending rate or retransmitting. That is what controllable means.

Accordingly, UDP is stateless and uncontrollable.

Byte-stream oriented. UDP's data transfer is datagram-based because it merely inherits the features of the IP layer, while TCP turns IP packets into a byte stream in order to maintain state.
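To make the contrast concrete, here is a minimal, self-contained Python sketch (the addresses and ports are arbitrary examples): UDP can fire a datagram at an address immediately, while TCP refuses to send before a connection exists.

```python
import socket

# UDP: connectionless; a datagram can be fired at any address at once.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello in one datagram", ("127.0.0.1", 9001))  # no handshake needed
udp.close()

# TCP: connection-oriented; sending before connect() fails because no
# byte stream has been established yet.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.send(b"hello")
except OSError as e:
    print("TCP refuses to send without a connection:", e)
tcp.close()
```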

002: Tell me about the process of TCP's three-way handshake. Why three times instead of two or four?

An analogy: take falling in love as an example. The most important thing for two people to be together is to first confirm each other's ability to love and to be loved. Let's use this to simulate the three-way handshake process.

The first time:

Man: I love you.

The woman receives it.

This proves that the man has the ability to love.

The second time:

Woman: I received your love, and I love you, too.

The man receives it.

OK, the current situation shows that the woman has the ability to love and be loved.

The third time:

Man: I received your love.

The woman receives it.

It is now possible to ensure that the man has the ability to be loved.

As a result, the ability of both parties to love and be loved is fully confirmed, and the two begin a sweet love.

Of course, this courtship analogy is nonsense and does not represent my values; its purpose is to make the meaning of the handshake process easy to grasp, because the two processes are quite similar. Mapped onto TCP's three-way handshake, what must be confirmed is likewise two capabilities of each party: the ability to send and the ability to receive. Hence the following three handshakes:

From the beginning, both sides are in a CLOSED state. Then the server starts listening on a port and enters the LISTEN state.

Then the client initiates the connection, sends SYN, and changes itself to the SYN-SENT state.

The server receives it, returns SYN and ACK (the ACK corresponding to the client's SYN), and moves to the SYN-RCVD state.

After that, the client sends ACK to the server, and it becomes the ESTABLISHED state; after the server receives the ACK, it also becomes the ESTABLISHED state.
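As a small illustration (the loopback address and port are arbitrary choices for this sketch), the kernel performs all of the above inside connect() and accept(); by the time either call returns, both sockets are ESTABLISHED:

```python
import socket, threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9000))
server.listen()                       # CLOSED -> LISTEN

def serve():
    conn, _ = server.accept()         # handshake completes; ESTABLISHED
    conn.close()

threading.Thread(target=serve).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9000))   # sends SYN; SYN-SENT -> ESTABLISHED
print("three-way handshake done")
client.close()
server.close()
```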

One more reminder: a SYN consumes a sequence number, so the ACK that answers it must acknowledge that sequence number plus 1. Why? Just remember one rule:

Anything that requires the peer's confirmation consumes a sequence number of the TCP segment.

SYN requires the peer's confirmation while ACK does not, so SYN consumes a sequence number and ACK does not.

Why not two handshakes? Root cause: the client's receiving ability cannot be confirmed.

The analysis is as follows:

If it were two handshakes: suppose the client sends a SYN to start the handshake, but the packet gets stuck in the network and does not arrive. TCP assumes it was lost and retransmits, and with two handshakes the retransmitted exchange establishes the connection.

That seems fine, but what if the packet stranded in the network reaches the server after the connection has been closed? Under a two-way handshake, the server establishes a connection by default as soon as it receives the SYN and sends its reply, yet the client has long since disconnected.

See the problem? This wastes connection resources.

Why not four times? The purpose of the three-way handshake is to confirm both parties' ability to send and to receive. Would a four-way handshake work?

Of course; even 100 would work. But three is enough to solve the problem, and anything beyond that adds little.

Can data be carried during the three-way handshake? Yes, on the third handshake; the first two handshakes cannot carry data.

If the first two handshakes could carry data, then anyone wanting to attack the server would only need to stuff a large amount of data into the first-handshake SYN; the server would be forced to spend extra time and memory processing it, increasing its exposure to attack.

By the time of the third handshake, the client is already in the ESTABLISHED state and has confirmed that the server's receiving and sending both work normally, so carrying data at this point is relatively safe.

What about a simultaneous open? How do the states change if both parties send a SYN at the same time?

This is a possible situation.

The state transition is as follows:

Just as the sender sends its SYN to the receiver, the receiver sends its own SYN to the sender; the two cross in flight.

After sending SYN, the status of both changes to SYN-SENT.

After each receives the other's SYN, both states change to SYN-RCVD.

Each then replies with the corresponding SYN + ACK, and after the peer receives it, both states change to ESTABLISHED.

That is the state transition for a simultaneous open.

003: Tell me about the process of TCP's four-way wave (connection teardown).

Process disassembly

At first, both sides were in a state of ESTABLISHED.

When the client wants to disconnect, it sends a FIN segment to the server.

After sending it, the client enters FIN-WAIT-1. Note that the client is now half-closed: it can no longer send data to the server, only receive.

Upon receiving it, the server acknowledges the client and enters the CLOSE-WAIT state.

The client receives the server's acknowledgment and enters FIN-WAIT-2.

Then the server sends its own FIN to the client and enters the LAST-ACK state.

After the client receives the server's FIN, it enters the TIME-WAIT state and sends an ACK to the server.

Note that the client must now wait long enough: specifically, 2 MSL (Maximum Segment Lifetime). If during this period the client receives no retransmitted FIN from the server, it means the ACK arrived successfully and the wave is over; otherwise the client resends the ACK.

What is the point of waiting 2 MSL? What happens if we don't wait?

If the client does not wait, it simply leaves while the server may still have many packets in flight toward it. If the client's port then happens to be taken by a new application, that application receives the stale packets, causing confusion. The safest course is to wait until the server's packets have all died out in the network before starting a new application on that port.

So isn't 1 MSL enough? Why wait 2 MSL?

The first MSL ensures that the last ACK from the active closer can reach the peer.

The second MSL ensures that, if that ACK is lost, the FIN retransmitted by the peer can still arrive. That is the point of waiting 2 MSL.

Why four waves instead of three? Because when the server receives the FIN, it often cannot return its own FIN immediately: it must first finish sending whatever data it still has. So it first sends an ACK to signal that it has received the client's FIN, and only sends its own FIN after a delay. That results in four waves.

What would go wrong with three waves? Three would mean the server merges its ACK and FIN into a single wave. But the server's potentially long delay could lead the client to believe its FIN never reached the server, causing the client to retransmit the FIN over and over.

What about a simultaneous close? How do the states change if the client and the server send FIN at the same time? Both enter FIN-WAIT-1 after sending; on receiving the peer's FIN, each moves to CLOSING; and once the final ACKs arrive, each passes through TIME-WAIT to CLOSED.

004: Talk about the relationship between the half-connection queue and SYN Flood attacks.

Before the three-way handshake begins, the server's state changes from CLOSED to LISTEN, and two queues are created internally: the half-connection queue and the full-connection queue, i.e. the SYN queue and the ACCEPT queue.

Half-connection queue: when the client sends a SYN to the server and the server receives it and replies with SYN and ACK, the server's state changes from LISTEN to SYN_RCVD, and the connection is pushed into the SYN queue, i.e. the half-connection queue.

Full-connection queue: when the client returns the ACK and the server receives it, the three-way handshake is complete. The connection then waits to be taken by the application; until then it sits in another queue maintained by TCP, the full-connection queue (accept queue).

The principle of the SYN Flood attack. SYN Flood is a typical DoS/DDoS attack. The principle is simple: the attacker forges a large number of nonexistent IP addresses in a short time and floods the server with SYNs. For the server, this has two dangerous consequences:

Processing the flood of SYN packets and returning the corresponding ACKs leaves a large number of connections stuck in the SYN_RCVD state, filling the entire half-connection queue so that normal requests cannot be served.

Because the IPs do not exist, the server never receives the clients' ACKs, so it keeps retransmitting SYN + ACK until its resources are exhausted.

How to deal with SYN Flood attacks?

Increase the capacity of the half-connection queue (the SYN backlog).

Reduce the number of SYN + ACK retries and avoid a large number of timeout retransmissions.

Use SYN Cookies: upon receiving a SYN, the server does not allocate connection resources immediately; instead it computes a Cookie from the SYN and replies to the client with it in the second handshake. When the client sends back its ACK carrying this Cookie value, the server verifies that the Cookie is legitimate before allocating connection resources. A toy sketch follows.
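This sketch only illustrates the idea from the last point: real kernels pack MSS information and a timestamp into the ISN itself, whereas here we simply key a MAC on the four-tuple; every name and value in it is made up for the demo.

```python
import hashlib, hmac, os, time

SECRET = os.urandom(16)  # per-server secret

def _cookie(src_ip, src_port, dst_ip, dst_port, minute):
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{minute}".encode()
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

def make_cookie(src_ip, src_port, dst_ip, dst_port):
    # Returned to the client inside the SYN + ACK; no state is stored.
    return _cookie(src_ip, src_port, dst_ip, dst_port, int(time.time() // 60))

def check_cookie(value, src_ip, src_port, dst_ip, dst_port):
    # Accept cookies minted this minute or the previous one.
    now = int(time.time() // 60)
    return any(value == _cookie(src_ip, src_port, dst_ip, dst_port, m)
               for m in (now, now - 1))

c = make_cookie("10.0.0.1", 12345, "10.0.0.2", 80)
print(check_cookie(c, "10.0.0.1", 12345, "10.0.0.2", 80))  # True
```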

005: Introduce the fields of the TCP header.

The header structure is as follows (field widths in bytes); it is worth committing to memory.

How do the source port and destination port uniquely identify a connection? They are part of the TCP connection four-tuple: source IP, source port, destination IP, and destination port.

Then why does the TCP segment carry no source or destination IP? Because IP addresses are already handled at the IP layer; TCP only needs to record the two ports.

The sequence number (Sequence Number) is the number of the first byte of this segment.

The sequence number is an unsigned 4-byte (32-bit) integer, so it ranges over 0 to 2^32 - 1; upon reaching the maximum it wraps around to 0.

Sequence numbers play two roles in TCP communication:

Each side's initial sequence number is exchanged via the SYN segments.

Ensure that packets are assembled in the correct order.

ISN is Initial Sequence Number (initial sequence number). During the three-way handshake, both parties use SYN messages to exchange each other's ISN.

The ISN is not a fixed value: classically it is driven by a clock that ticks every few microseconds (4 μs in the original spec), wrapping to 0 on overflow, which makes the ISN hard to guess. Why do that?

If an attacker could predict the ISN, then, since source IP and source port are easy to forge, he could forge a RST and force the connection closed, which is very dangerous.

The dynamic growth of the ISN greatly increases the difficulty of guessing it.

The acknowledgment number (Acknowledgment Number) tells the peer the next sequence number it expects; it implies that all bytes numbered below the acknowledgment number have been received.

The most common flag bits are SYN, ACK, FIN, RST, and PSH.

SYN and ACK were covered above; the remaining three are explained as follows. FIN: Finish, indicating that the sender is ready to disconnect.

RST: Reset, which is used to force disconnection.

PSH: Push, telling the peer that these packets should be delivered to the upper-layer application immediately upon receipt rather than buffered.

The window size field occupies two bytes (16 bits), which in practice is not enough, so TCP introduced the window scaling option: a scale factor n ranging from 0 to 14 that multiplies the window value by 2^n.

The checksum occupies two bytes and guards against damage to the packet in transit. If TCP encounters a segment whose checksum is wrong, it simply discards it and waits for the retransmission.
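For intuition, here is a sketch of the Internet checksum arithmetic the field uses: a 16-bit one's-complement sum. (In real TCP the sum also covers a pseudo-header containing the source and destination IPs, omitted here for brevity.)

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum with end-around carry."""
    if len(data) % 2:
        data += b"\x00"                           # pad to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"an example tcp payload")))
```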

The options field follows the fixed header; the common options are as follows:

TimeStamp: TCP timestamp, which is described in more detail later.

MSS: the maximum segment size this endpoint is willing to receive from the peer.

SACK: the selective acknowledgment option.

Window Scale: the window scaling option.

006: Talk about the principle of TCP Fast Open (TFO).

The first section covered the TCP three-way handshake. Some may say: a full three-way handshake every single time is so troublesome! Can it be optimized?

Gladly. Today let's look at the optimized handshake process: the principle of TCP Fast Open (TFO).

The optimization goes like this. Remember the SYN Cookie mentioned in the SYN Flood discussion? That Cookie (not the browser kind) can also be used to implement TFO.

TFO process

In the first round of three-way handshake, the client sends SYN to the server, and the server receives it.

Pay attention! Instead of replying with SYN + ACK immediately, the server computes a SYN Cookie, places it in the Fast Open option of the TCP segment, and returns it to the client.

The client receives the Cookie value and caches it. The three-way handshake then completes normally.

That is only the first round, though. Subsequent three-way handshakes are different!

In subsequent three-way handshakes, the client sends the previously cached Cookie, the SYN, and the HTTP request (yes, you read that correctly) to the server. The server verifies the Cookie's validity: if it is invalid, it is simply discarded; if it is valid, the server returns SYN + ACK normally.

Here is the point: the server can now send the HTTP response to the client! This is the most significant change; the three-way handshake has not yet completed, yet merely by verifying the Cookie's legitimacy, the server may already return the HTTP response.

Of course, the client's ACK still has to be delivered as usual; otherwise it could hardly be called a three-way handshake.

Note: the client's final-handshake ACK does not have to wait for the server's HTTP response to arrive; the two are independent of each other.

TFO's advantage lies not in the first round of handshakes but in the rounds after it. Once the server obtains and verifies the client's Cookie, it can return the HTTP response directly, putting the 1 RTT (round-trip time) saved to use for data transmission in advance, which is a significant gain.
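On Linux, a client can actually exercise TFO from user space: sendto() with the MSG_FASTOPEN flag combines connect() and the first write, so the request can ride in the SYN. A hedged sketch, assuming kernel TFO support (net.ipv4.tcp_fastopen enabled) and a TFO-capable server; the host and request are placeholders:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
request = b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"
# connect + send in one step: the data travels with the SYN when a
# cached TFO cookie exists, falling back to a normal handshake otherwise.
sock.sendto(request, socket.MSG_FASTOPEN, ("example.com", 80))
print(sock.recv(4096))
sock.close()
```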

007: Can you talk about the role of timestamps in TCP segments?

The timestamp is an optional TCP header field occupying 10 bytes in total, in the following format:

kind (1 byte) + length (1 byte) + info (8 bytes), where kind = 8, length = 10, and info consists of two parts, timestamp and timestamp echo, each occupying 4 bytes.

So what are these fields for? What problems are they used to solve?

Next, let's sort it out one by one. TCP timestamps mainly solve two major problems:

Calculate round trip delay RTT (Round-Trip Time)

Prevent sequence number wraparound

Computing the round-trip time RTT. Without timestamps, RTT measurement runs into the retransmission ambiguity problem: when a segment is retransmitted and an ACK then arrives, the sender cannot tell which transmission the ACK belongs to.

If it takes the first send as the start time but the ACK was actually for the retransmission, the measured RTT comes out too large; the start time should have been the second send.

If it takes the second send as the start time but the ACK was actually for the original, the measured RTT comes out too small; the start time should have been the first send.

So whether the start time is taken from the first or the second transmission, the result is inaccurate.

Introducing timestamps solves this problem nicely.

For example, suppose a sends segment S1 to b, and b replies to a with segment S2 carrying the ACK. Then:

Step 1: when a sends S1 to b, the timestamp field holds host a's kernel time at send time, ta1.

Step 2: when b replies to a with S2, its timestamp field holds host b's time tb, and the timestamp echo field holds ta1, parsed from segment S1.

Step 3: when a receives b's S2, a's kernel time is ta2, and ta1 can be read from the timestamp echo option of S2; that is the original send time of S1. Then RTT is simply ta2 - ta1.

Preventing sequence number wraparound. Let's simulate the problem.

Sequence numbers actually range from 0 to 2^32 - 1, but to keep the demonstration small, assume the range is 0 ~ 4, wrapping back to 0 after reaching 4.

Send # | Bytes sent | Sequence numbers | Status
1      | 0 ~ 1      | 0 ~ 1            | received successfully
2      | 1 ~ 2      | 1 ~ 2            | stranded in the network
3      | 2 ~ 3      | 2 ~ 3            | received successfully
4      | 3 ~ 4      | 3 ~ 4            | received successfully
5      | 4 ~ 5      | 0 ~ 1            | received successfully (sequence numbers wrap, starting from 0 again)
6      | 5 ~ 6      | 1 ~ 2            | ?

Suppose at the 6th send, the packet that had been stranded in the network finally arrives. Now there are two packets with sequence numbers 1 ~ 2; how can the receiver tell which is which? This is the sequence number wraparound problem.

Timestamps solve this nicely: every packet records the sending machine's kernel time in the segment, so even when two packets carry the same sequence number, their timestamps cannot be the same, and the two can be told apart.
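A minimal sketch of that comparison, in the spirit of PAWS (Protection Against Wrapped Sequences): a segment whose timestamp is older than the newest one seen on the connection is treated as stale. The function and values below are illustrative, not kernel code.

```python
def is_stale(segment_tsval: int, last_tsval: int) -> bool:
    """True if segment_tsval is older than last_tsval, using a
    wraparound-aware signed comparison on 32-bit timestamp values."""
    return ((segment_tsval - last_tsval) & 0xFFFFFFFF) > 0x7FFFFFFF

# The stranded copy of seq 1 ~ 2 was sent earlier, so its timestamp is older:
print(is_stale(segment_tsval=1000, last_tsval=5000))  # True  -> drop it
print(is_stale(segment_tsval=6000, last_tsval=5000))  # False -> accept it
```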

008: How is TCP's retransmission timeout calculated?

TCP has a timeout retransmission mechanism: if no reply arrives within a certain period, the packet is retransmitted.

So how is this retransmission interval calculated?

Let's discuss this problem today.

This retransmission interval is called the Retransmission TimeOut (RTO for short), and its calculation is closely tied to the RTT introduced in the previous section. Two main methods follow: the classical method and the standard method.

The classical method introduces a new concept, SRTT (smoothed round-trip time): each time a new RTT sample is produced, SRTT is updated by a fixed rule. Specifically (with an initial SRTT of 0):

SRTT = (α * SRTT) + ((1 - α) * RTT), where α is the smoothing factor, recommended value 0.8, with a range of 0.8 ~ 0.9.

With SRTT, we can calculate the value of RTO:

RTO = min(ubound, max(lbound, β * SRTT)), where β is a weighting factor, generally 1.3 ~ 2.0, lbound is the lower bound, and ubound is the upper bound.

This algorithm is quite simple, but it has a limitation: it performs well where RTT is stable and poorly where RTT fluctuates heavily, because with the smoothing factor α in the 0.8 ~ 0.9 range, a new RTT sample influences the RTO too little.

The standard method. To fix the classical method's insensitivity to RTT changes, the standard method, also known as the Jacobson / Karels algorithm, was introduced.

There are three steps.

Step 1: calculate SRTT. The formula is as follows:

SRTT = (1 - α) * SRTT + α * RTT. Note that α here differs from the classical method's; the recommended value is 1/8, i.e. 0.125.

Step 2: calculate the intermediate variable RTTVAR (round-trip time variation).

RTTVAR = (1 - β) * RTTVAR + β * |RTT - SRTT|, with a recommended β of 0.25. This is the highlight of the algorithm: it records how far the latest RTT deviates from the current SRTT, giving us a handle for sensing changes in RTT.

Step 3: calculate the final RTO:

RTO = μ * SRTT + ∂ * RTTVAR, with recommended values μ = 1 and ∂ = 4.

On top of SRTT, this formula adds in the latest deviation, so it senses changes in RTT well; under this algorithm, RTO tracks RTT much more closely.
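Putting the standard method's three steps together, a minimal sketch (times in seconds; the initial SRTT/RTTVAR values are chosen just for the demo):

```python
ALPHA = 0.125   # weight of the newest RTT sample in SRTT (1/8)
BETA = 0.25     # weight of the newest deviation in RTTVAR (1/4)
MU, DELTA = 1, 4

def update_rto(srtt, rttvar, rtt):
    """Fold one RTT sample into (srtt, rttvar) and return the new RTO."""
    srtt = (1 - ALPHA) * srtt + ALPHA * rtt
    rttvar = (1 - BETA) * rttvar + BETA * abs(rtt - srtt)
    return srtt, rttvar, MU * srtt + DELTA * rttvar

srtt, rttvar = 0.100, 0.025
for rtt in (0.100, 0.100, 0.300, 0.100):   # a stable link with one spike
    srtt, rttvar, rto = update_rto(srtt, rttvar, rtt)
    print(f"RTT={rtt:.3f}s  SRTT={srtt:.3f}s  RTO={rto:.3f}s")
```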

009: Can you tell me about TCP's flow control?

On both ends, TCP places the data to be transmitted into a send buffer and the data received into a receive buffer.

What flow control does is govern the sender's transmission according to the remaining capacity of the receiver's buffer: if the peer's receive buffer is full, nothing more can be sent.

To understand flow control, we first need to understand the concept of sliding window.

The TCP sliding window comes in two kinds: the send window and the receive window.

The send window. The sender's sliding window has the following structure, containing four parts:

Sent and confirmed

Sent but not confirmed

Not sent but can be sent

Not yet sent and cannot be sent

A few important abbreviations: SND means send, WND means window, UNA means unacknowledged, and NXT means next, i.e. the position of the next send. The send window spans from SND.UNA to SND.UNA + SND.WND, covering the middle two parts above.

The receive window. The receiver's window structure is as follows:

RCV means receive; RCV.NXT marks the position of the next byte expected, and RCV.WND is the size of the receive window.

The flow control process. We don't need an overly complex example; the simplest one will do to simulate the process and make it easy to understand.

First, the two sides shake hands three times and initialize their respective window sizes, both of which are 200 bytes.

Suppose the sender now sends 100 bytes to the receiver. For the sender, SND.NXT naturally moves 100 bytes to the right, meaning the currently available window shrinks by 100 bytes; so far so good.

Those 100 bytes reach the receiver and are placed in its buffer queue. But the receiver is under heavy load and can only process 40 bytes; the remaining 60 stay in the buffer queue.

At this point the receiver's processing capacity is insufficient, so it wants the sender to send less: its receive window must shrink by 60 bytes, from 200 to 140, because 60 bytes remain in the buffer queue not yet taken by the application.

The receiver therefore advertises the reduced window in the header of its ACK, and the sender adjusts its send window to 140 bytes accordingly.

For the sender, the sent-and-acknowledged portion grows by 40 bytes, that is, SND.UNA moves 40 bytes to the right, and the send window shrinks to 140 bytes.

That is the flow control process. However many rounds there are, the control process and principle stay the same.
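The example above in a few lines of toy code (sizes in bytes; this models only the window arithmetic, not real sockets):

```python
class Receiver:
    BUF = 200                       # total receive buffer

    def __init__(self):
        self.queued = 0             # bytes the application hasn't taken yet

    def receive(self, n, app_capacity):
        consumed = min(self.queued + n, app_capacity)
        self.queued = self.queued + n - consumed
        return self.BUF - self.queued   # window advertised in the ACK

recv = Receiver()
send_window = 200
send_window = recv.receive(100, app_capacity=40)  # app only handles 40 bytes
print(send_window)  # 140: the sender shrinks its send window accordingly
```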

010: Can you talk about TCP's congestion control?

The flow control of the previous section happens between sender and receiver and takes no account of the wider network environment. If the current network is particularly poor and prone to packet loss, the sender needs to take notice, and that is the problem congestion control deals with.

For congestion control, TCP maintains two core states for each connection:

Congestion window (Congestion Window, cwnd)

Slow start threshold (Slow Start Threshold, ssthresh)

The algorithms involved are as follows:

Slow start

Congestion avoidance

Fast retransmission and fast recovery

Next, we will disassemble these states and algorithms one by one. First, let's start with the congestion window.

The congestion window (Congestion Window, cwnd) is the amount of data the sender can still transmit at present.

The receive window was introduced earlier; what is the difference between the two?

The receiving window (rwnd) is the limit given by the receiver.

Congestion window (cwnd) is a limit on the sender.

What do they limit?

They limit the size of the send window.

Given these two windows, how is the send window calculated?

Send window size = min(rwnd, cwnd), i.e. the smaller of the two. Congestion control works by governing how cwnd changes.

Slow start. When data transfer begins, you don't know whether the current network is stable or congested. Acting too aggressively and blasting packets out will cause massive packet loss and an avalanche of network disaster.

Therefore, the first step of congestion control is to use a conservative algorithm to slowly adapt to the whole network, which is called slow start. The operation process is as follows:

First, shake hands three times, and both sides announce the size of their receiving window.

Both sides initialize their own congestion window (cwnd) size

Once transmission begins, cwnd grows by 1 for every ACK the sender receives; in effect, cwnd doubles with every RTT that passes. If the initial window is 10, then after the first round of 10 segments is transmitted and acknowledged, cwnd becomes 20, then 40 after the second round, then 80 after the third, and so on.

Will it just keep doubling forever? Of course not. Its ceiling is called the slow start threshold; when cwnd reaches this threshold, it is like stepping on the brake: not so fast, hold on!

After reaching the threshold, how to control the size of the cwnd?

This is where congestion avoidance comes in.

Congestion avoidance. Previously cwnd increased by 1 for every ACK received; after the threshold, each ACK adds only 1/cwnd. Work it out: if a round of RTT brings back cwnd ACKs, the congestion window ends up growing by a total of just 1.

In other words, where one RTT used to double cwnd, it now adds just 1.

Of course, slow start and congestion avoidance work together and are integrated.
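A toy sketch of how the two phases hand off (cwnd counted in segments; the initial window and threshold are arbitrary demo values):

```python
cwnd, ssthresh = 10, 80

for rtt in range(1, 7):
    if cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)  # slow start: +1 per ACK, so cwnd
                                        # doubles each RTT up to the threshold
    else:
        cwnd += 1                       # congestion avoidance: +1/cwnd per ACK,
                                        # so roughly +1 per RTT
    print(f"after RTT {rtt}: cwnd = {cwnd}")
# prints 20, 40, 80, 81, 82, 83
```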

Fast retransmission and fast recovery

Fast retransmit. During TCP transmission, if a packet is lost, that is, if the receiver finds segments arriving out of order, the receiver keeps replying with the ACK for the last in-order segment.

For example, if the 5th packet is lost, then even as the 6th and 7th packets arrive, the receiver keeps returning the ACK for the 4th packet. When the sender receives three duplicate ACKs, it realizes the packet was lost and retransmits immediately, without waiting out a full RTO.

This is fast retransmit, and it settles the question of whether to retransmit.

Selective retransmission. You may ask: when retransmitting, should only the 5th packet be resent, or the 5th, 6th, and 7th?

Of course the 6th and 7th have already arrived, and TCP's designers are not stupid; why resend what is already there? Simply record which packets have arrived and which have not, and retransmit exactly what is needed.

After receiving segments from the sender, the receiver replies with an ACK, and it can add the SACK attribute to the segment's options, telling the sender via left edge / right edge which ranges of data have been received. So even though the 5th packet is lost, when the 6th and 7th arrive the receiver still informs the sender that those two made it, and only the missing 5th is retransmitted. This mechanism, selective acknowledgment (SACK, Selective Acknowledgment), settles the question of what to retransmit.

Fast recovery. After receiving three duplicate ACKs, the sender realizes a packet was lost and senses that the network is getting congested, so it enters the fast recovery phase.

At this stage, the sender changes as follows:

Congestion threshold reduced to half of cwnd

The size of the cwnd becomes the congestion threshold

Linear increase of cwnd

These are the classic algorithms of TCP congestion control: slow start, congestion avoidance, fast retransmission and fast recovery.

011: Can you tell me about Nagle's algorithm and delayed ACK?

Nagle's algorithm. Imagine a scenario in which the sender keeps sending tiny packets to the receiver, only 1 byte at a time, so sending 1000 bytes takes 1000 sends. Such frequent transmission is problematic: beyond the propagation delay itself, the constant sending and acknowledging carries its own cost, and the churn brings enormous delay.

Avoiding this frequent exchange of small packets is exactly what Nagle's algorithm does.

Specifically, the rules of Nagle's algorithm are as follows:

The very first send goes out immediately, even a 1-byte packet, with no waiting. After that, data is sent only when one of the following conditions holds: the accumulated data reaches the maximum segment size (Max Segment Size, MSS), or the ACKs for all previously sent packets have been received.
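Nagle's algorithm is on by default; latency-sensitive applications that write many small chunks often disable it with the standard TCP_NODELAY socket option. A short sketch (the host, port, and request are placeholders):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # small writes go out immediately
sock.connect(("example.com", 80))
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(1024))
sock.close()
```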

Delayed ACK. Imagine a scenario: I receive a packet from the sender, and then a second one a very short time later. Should I reply to each individually, or wait a moment and reply with one ACK merged for both packets?

Delayed ACK does the latter: delay slightly, merge the ACKs, and reply to the sender once. TCP requires this delay to be less than 500 ms, and typical operating system implementations keep it under 200 ms.

However, some scenarios cannot tolerate the delay and must be ACKed as soon as the data is received:

A segment larger than one frame is received and the window size needs to be adjusted

TCP is in quickack mode (set by tcp_in_quickack_mode)

An out-of-order packet is detected

What happens when the two are used together? The former delays sending while the latter delays acknowledging; combined, they compound into greater delay and performance problems.

012: How to understand TCP's keep-alive?

Everyone has heard of HTTP's keep-alive, but TCP has a keep-alive mechanism of its own at the transport level, distinct from the application layer's.

Imagine a scenario in which one side of a connection drops out due to a network failure or a crash. Since TCP is not a polling protocol, the other side will not learn that the peer has failed until the next packet is sent.

This is where keep-alive comes in: its job is to detect whether the connection on the other side has gone dead.
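A minimal sketch, assuming Linux: keep-alive can also be enabled and tuned per socket, using the same three knobs exposed by the sysctls shown below (the values here mirror those defaults).

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)       # turn probing on
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200)   # idle seconds before the first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)    # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 9)       # unanswered probes before the peer is declared dead
```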

Under Linux, you can view the relevant configuration as follows:

sudo sysctl -a | grep keepalive
net.ipv4.tcp_keepalive_time = 7200 (probe after 7200 seconds of idleness)
net.ipv4.tcp_keepalive_probes = 9 (send at most 9 probes)
net.ipv4.tcp_keepalive_intvl = 75 (wait 75 seconds between probes)

That is all for "What are the TCP protocol interview questions?". Thank you for reading. If you want to know more about the industry, you can keep following the website; the editor will keep producing high-quality practical articles for you!
