This article mainly introduces the RTMP protocol. Many people have questions about what RTMP is and how it works in practice; this article organizes the relevant material into a simple, practical walkthrough. I hope it helps answer your doubts about the RTMP protocol. Let's get started!
I. Background
The Real-Time Messaging Protocol (RTMP) is currently the dominant protocol for live streaming. It is a proprietary application-layer protocol designed by Adobe to carry audio and video data between Flash players and servers. RTMP is the common push/pull protocol underlying the live-streaming services of all major cloud vendors. With the growth of the domestic live-streaming industry and the arrival of the 5G era, a basic understanding of RTMP has become an essential skill for us as programmers.
This article describes the basic ideas and core concepts of RTMP, supplemented by a source-code analysis of livego, to walk through the core knowledge points of the RTMP protocol with you.
II. Characteristics of RTMP protocol
The main features of the RTMP protocol are multiplexing, packetization, and being an application-layer protocol. These characteristics are described in detail below.
2.1 Multiplexing
Multiplexing means that the sender transmits multiple signals simultaneously over one channel, and the receiver then separates and reassembles what was transmitted over that channel into independent, complete pieces of information, so that the communication line is used more effectively.
In short, on a single TCP connection, each Message to be transmitted is split into one or more Chunks, and the Chunks belonging to the same Message form a Chunk Stream. At the receiving end, the Chunks of a Chunk Stream are merged back together to restore the complete Message. This is the basic idea of multiplexing.
The figure above shows a simple example. Suppose a 300-byte Message needs to be transmitted; we can split it into three Chunks, and each Chunk is divided into a Chunk Header and Chunk Data. The Chunk Header carries basic information about the Chunk, such as the Chunk Stream Id and the Message Type; the Chunk Data carries the original payload. In the figure, the Message is split into 128 + 128 + 44 = 300 bytes, so the whole Message can be transmitted completely.
The formats of Chunk Header and Chunk Data will be described in detail later.
2.2 Packetization
The second major feature of the RTMP protocol is packetization. Compared with the RTSP protocol, packetization is a distinctive feature of RTMP. Unlike ordinary business application-layer protocols (such as RPC protocols), the packets carried in multimedia transmission, namely audio and video packets, tend to be large. Transmitting large packets over TCP, a reliable transport protocol, can easily block the connection and prevent higher-priority information from getting through; splitting the data into smaller packets solves this problem. The specific packet format is described below.
2.3 Application layer Protocol
The last feature of RTMP is that it is an application-layer protocol. RTMP is implemented on top of TCP by default, but the official RTMP documentation only gives the standard data transmission format and some specific protocol format descriptions; there is no complete official implementation. This has given rise to many industry variants, such as RTMP over UDP and other private adaptations, which leaves more room for extension and makes it easier to address the latency and other problems of native RTMP live streaming.
III. Analysis of RTMP protocol
As an application-layer protocol, RTMP, like other private transport protocols (such as RPC protocols), has a number of concrete implementations, such as nginx-rtmp, livego, and srs. In this article we choose livego, an open-source live-streaming server written in Go, to analyze the main flow at the source-code level, study the core push and pull processes of RTMP in depth, and help you build an overall understanding of the protocol.
Before analyzing the source code, let's build a basic understanding of protocol formats by analogy with an RPC protocol. First, look at a relatively simple but practical RPC protocol format, shown in the following figure:
We can see that this is the data transfer format used during RPC calls, and the fundamental purpose of such a format is to solve the TCP "packet sticking and splitting" problem.
The RPC protocol format in the figure can be described briefly as follows. First, 2 bytes hold the magic number MAGIC, marking the packet as one the peer can recognize; if the received 2 bytes are not 0xbabe, the packet is discarded. Next, the sign field occupies 1 byte: the lower 4 bits encode the message type (request / response / heartbeat) and the higher 4 bits encode the serialization type (json, hessian, protobuf, kryo, and so on). The third field, status, occupies 1 byte and represents the status. Then 8 bytes carry the requestId of the call; usually the lower 48 bits (2^48 values) are more than enough. Finally, a fixed 4-byte body size describes the length of the Body Content, which lets the receiver quickly parse out the complete RPC Message.
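To make that layout concrete, here is a minimal Go sketch of how a receiver might parse this example RPC header. The 0xbabe magic and the field sizes follow the description above; the function name, struct, and sample buffer are invented purely for illustration.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// rpcHeader mirrors the example layout described above:
// 2-byte magic, 1-byte sign (message type + serialization),
// 1-byte status, 8-byte request id, 4-byte body length.
type rpcHeader struct {
	MsgType   byte // lower 4 bits of the sign byte
	Serialize byte // higher 4 bits of the sign byte
	Status    byte
	RequestID uint64
	BodyLen   uint32
}

func parseRPCHeader(buf []byte) (*rpcHeader, error) {
	if len(buf) < 16 {
		return nil, errors.New("header too short")
	}
	if binary.BigEndian.Uint16(buf[0:2]) != 0xbabe {
		return nil, errors.New("bad magic, drop packet")
	}
	sign := buf[2]
	return &rpcHeader{
		MsgType:   sign & 0x0f,
		Serialize: sign >> 4,
		Status:    buf[3],
		RequestID: binary.BigEndian.Uint64(buf[4:12]),
		BodyLen:   binary.BigEndian.Uint32(buf[12:16]),
	}, nil
}

func main() {
	// a fake 16-byte header with an empty body, for illustration only
	buf := []byte{0xba, 0xbe, 0x12, 0x00, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0}
	h, err := parseRPCHeader(buf)
	fmt.Println(h, err)
}
```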
Analyzing this simple RPC protocol reveals a good idea: use bytes as efficiently as possible, that is, transmit the most information with the smallest byte array. Even a single byte can carry a lot of information; after all, one byte has 256 possible values. If one byte is enough to convey the needed information on the network, then extremely limited resources are put to maximum use. The official RTMP specification appeared in 2012, and although the protocol looks complex and even a little bloated from today's point of view, having such ideas back then makes it an example worth learning from.
Even today, when the WebRTC protocol is everywhere, we can still see the shadow of RTMP in WebRTC's design and implementation. The RPC protocol above can be regarded as a simplified design with a concept similar to RTMP's.
3.1 RTMP core concepts
Before analyzing the RTMP source code, let's describe a few core concepts of the RTMP protocol. This gives us a basic, macroscopic picture of the whole RTMP stack, and during the later source-code analysis, packet captures will help us examine the related principles more intuitively.
First, just like the RPC protocol format above, the entity that RTMP actually transmits is the Chunk. A Chunk consists of a Chunk Header and a Chunk Body, as shown in the following figure.
3.1.1 Chunk Header
The Chunk Header part is quite different from the RPC protocol mentioned earlier, mainly because the length of the RTMP Chunk Header is not fixed. Why not? Adobe made it variable to save transmission overhead. From the example of splitting a 300-byte Message into three Chunks, we can see an obvious downside of multiplexing: each Chunk needs a Chunk Header to carry its basic information, which adds extra bytes to the stream. Therefore, to keep the number of transmitted bytes as small as possible, RTMP keeps squeezing the size of the Header so that transmission efficiency stays as high as possible.
First, let's look at the Basic Header part of the Chunk Header. The length of the Basic Header is not fixed: it can be 1, 2, or 3 bytes, depending on the Chunk Stream Id (abbreviated csid).
The csid range supported by the RTMP protocol is 2–65599; the values 0 and 1 are reserved by the protocol and may not be used. The Basic Header contains at least 1 byte (the low 8 bits), and that first byte determines the total length, as shown in the following figure. The high 2 bits of this byte are fmt; the value of fmt determines the format of the Message Header, which will be discussed later. The lower 6 bits carry the csid. When the lower 6 bits are 0, the real csid is too large to fit in 6 bits and one extra byte follows; when the lower 6 bits are 1, the real csid is too large even for that 2-byte form and two extra bytes follow. So the length of the whole Basic Header is not fixed; it depends entirely on the value of the lower 6 bits of the first byte.
In practice, not that many csids are used; in general, the Basic Header is one byte long and the csid value ranges from 2 to 63.
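To make the 1/2/3-byte cases concrete, here is a minimal Go sketch of decoding the Basic Header. It follows the rules above (fmt in the high 2 bits, csid markers 0 and 1); the function itself is an illustration, not livego's code.

```go
package rtmp

import "io"

// readBasicHeader decodes the 1/2/3-byte Basic Header described above.
func readBasicHeader(r io.Reader) (format uint8, csid uint32, err error) {
	var b [1]byte
	if _, err = io.ReadFull(r, b[:]); err != nil {
		return
	}
	format = b[0] >> 6         // high 2 bits: fmt, selects the Message Header layout
	csid = uint32(b[0] & 0x3f) // low 6 bits: csid, or the markers 0 / 1
	switch csid {
	case 0: // 2-byte form: csid = 64 + second byte (range 64..319)
		if _, err = io.ReadFull(r, b[:]); err != nil {
			return
		}
		csid = 64 + uint32(b[0])
	case 1: // 3-byte form: csid = 64 + second byte + third byte * 256 (range 64..65599)
		var b2 [2]byte
		if _, err = io.ReadFull(r, b2[:]); err != nil {
			return
		}
		csid = 64 + uint32(b2[0]) + uint32(b2[1])*256
	}
	return
}
```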
So far we have only covered the Basic Header, which is just one component of the Chunk Header. The authors of the RTMP protocol, who clearly enjoy this kind of tinkering, designed the RTMP Chunk Header to be dynamically sized, again to save transmission space. The easy way to understand this is that the Chunk Message Header also comes in four length variants, determined by the value of fmt mentioned earlier.
The four formats of Message Header are shown in the following figure:
When fmt is 0, the Message Header occupies 11 bytes (note that these 11 bytes do not include the Basic Header), consisting of a 3-byte timestamp, a 3-byte message length, a 1-byte message type id, and a 4-byte message stream id.
Here timestamp is an absolute timestamp indicating when the message was sent; message length is the length of the chunk body; message type id indicates the message type, described later; message stream id is the unique id of the message stream. Note that if the absolute timestamp of the message is greater than 0xFFFFFF, the time is too large to fit in 3 bytes and must be carried in an extended timestamp (Extended Timestamp) instead. The extended timestamp is 4 bytes long and is placed between the Chunk Header and the Chunk Body by default.
When fmt is 1, the Message Header occupies 7 bytes; compared with the 11-byte header it omits the 4-byte message stream id. Such a chunk reuses the message stream id of the preceding chunk and is generally used for variable-length message payloads.
When fmt is 2, the Message Header occupies only 3 bytes, containing just the timestamp. Compared with the previous formats it omits both the stream id and the message length; this form is generally used for fixed-length messages that only need a time correction (such as audio data).
When fmt is 3, the Chunk Header contains no Message Header at all. Generally, when a complete RTMP Message is split into chunks, the first Chunk uses fmt 0 and the subsequent Chunks use fmt 3. This way the first Chunk carries the complete header information while the later Chunks carry the smallest possible header, which is easy to implement and compresses well. Of course, when a second Message is sent after the first Message has completed, the first Chunk of the second Message is typically of type fmt 1, and its remaining Chunks again use fmt 3, so that the messages can be told apart.
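The four layouts can be summarized in a small sketch. The struct and function below are illustrative (extended timestamps are left out for brevity); the field sizes and endianness follow the description above, with absent fields inherited from the previous chunk of the same chunk stream.

```go
package rtmp

import (
	"encoding/binary"
	"io"
)

// messageHeader mirrors the fields discussed above; which of them appear on
// the wire depends on fmt.
type messageHeader struct {
	Timestamp uint32 // 3 bytes (absolute for fmt 0, a delta for fmt 1/2)
	Length    uint32 // 3 bytes, length of the message body
	TypeID    uint8  // 1 byte
	StreamID  uint32 // 4 bytes, little-endian
}

func be24(b []byte) uint32 { return uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]) }

// readMessageHeader reads 11 / 7 / 3 / 0 bytes according to fmt.
func readMessageHeader(r io.Reader, format uint8, prev messageHeader) (messageHeader, error) {
	h := prev // fmt 1/2/3 inherit the missing fields from the previous chunk
	sizes := [4]int{11, 7, 3, 0}
	buf := make([]byte, sizes[format])
	if _, err := io.ReadFull(r, buf); err != nil {
		return h, err
	}
	switch format {
	case 0:
		h.Timestamp = be24(buf[0:3]) // absolute timestamp
		h.Length = be24(buf[3:6])
		h.TypeID = buf[6]
		h.StreamID = binary.LittleEndian.Uint32(buf[7:11])
	case 1:
		h.Timestamp += be24(buf[0:3]) // timestamp delta
		h.Length = be24(buf[3:6])
		h.TypeID = buf[6]
	case 2:
		h.Timestamp += be24(buf[0:3]) // timestamp delta only
	case 3:
		// nothing on the wire: everything is reused from the previous chunk
	}
	return h, nil
}
```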
3.1.2 Chunk Body
I have spent a lot of time on the Chunk Header, so let's describe the Chunk Body briefly. Compared with the Chunk Header, the Chunk Body is simple: it has no variable-length tricks and a relatively plain structure. The data inside is the data with real business meaning, and its length is 128 bytes by default (it can be changed through negotiation with the set chunk size command). The payload is usually organized as AMF or as FLV-format audio/video data (without the FLV TAG header). The layout of an AMF-organized payload is shown in the following figure; the FLV format is not covered in this article, and interested readers can consult the official FLV documentation.
3.1.3 AMF
AMF (Action Message Format) is a binary data serialization format, similar in purpose to JSON or XML. Adobe Flash and remote servers can exchange data in AMF format.
The specific format of AMF is quite similar to a Map data structure: on top of a KV key-value pair, there is an extra length for the Value in the middle. The structure of AMF is roughly as shown in the following figure. Sometimes the len field is empty, which is determined by type. For example, if we transmit AMF data of the number type, the len field can be omitted, because a number value always occupies 8 bytes and both sides know it.
Another example: if AMF transmits data of the 0x02 string type, the len field occupies 2 bytes by default, because 2 bytes are enough to describe the maximum length of the following value. And so on; of course there are cases where neither len nor value is present, for example when transmitting 0x05 (null), where both can be omitted.
The following lists some commonly used AMF type markers. For more information, see the official documentation.
We can capture packets with WireShark and get a feel for the concrete AMF0 format.
As shown in the figure above, this is a very typical AMF0 string structure. There are currently two main versions of AMF, AMF0 and AMF3, and AMF0 still dominates in practical use today. So what is the difference between them, and when the client sends Chunk Data in AMF format to the server, how does the server know whether it is AMF0 or AMF3? In fact RTMP distinguishes them by the message type id in the Chunk Header: the message type id is 20 when a message is encoded with AMF0 and 17 when it is encoded with AMF3.
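As a small illustration of the AMF0 rules just described (number = marker plus an 8-byte value with no len, string = marker plus a 2-byte len plus the value, null = marker only), here is a hedged Go sketch of the corresponding encoders. The markers 0x00 / 0x02 / 0x05 come from the AMF0 specification; the function names are made up for this article, not livego's amf package.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"math"
)

func encodeNumber(buf *bytes.Buffer, v float64) {
	buf.WriteByte(0x00) // type marker: number; the value is always 8 bytes, no len field
	binary.Write(buf, binary.BigEndian, math.Float64bits(v))
}

func encodeString(buf *bytes.Buffer, s string) {
	buf.WriteByte(0x02)                                 // type marker: string
	binary.Write(buf, binary.BigEndian, uint16(len(s))) // 2-byte length
	buf.WriteString(s)
}

func encodeNull(buf *bytes.Buffer) {
	buf.WriteByte(0x05) // type marker: null; no len and no value
}

func main() {
	var buf bytes.Buffer
	encodeString(&buf, "connect")
	encodeNumber(&buf, 1) // transaction id
	encodeNull(&buf)
	fmt.Printf("% x\n", buf.Bytes())
}
```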
3.1.4 Chunk & Message
First, to summarize the relationship between Chunk and Message in one sentence: a Message is made up of multiple Chunks; the Chunks that share the same Chunk Stream Id form a Chunk Stream, which the receiver can re-merge and parse into a complete Message. Compared with RPC messages, RTMP has many more message types: the RPC message types above boil down to request, response, and heartbeat, while RTMP's message types are much richer. RTMP messages fall into three main categories: protocol control messages, data messages, and command messages.
**Protocol control messages:** Message Type ID = 1–6, mainly used for control inside the protocol.
**Data messages:** Message Type ID = 8, 9, 18
8: Audio, audio data
9: Video, video data
18: Metadata, including audio/video codec, video width and height, and other audio/video metadata.
**Command messages (Command Message, 20/17):** mainly the NetConnection and NetStream types of messages, each with several functions; calling such a message can be understood as a remote function call.
The overview figure is as follows; it will be described in detail in the section on source-code parsing. The colored parts are the commonly used messages.
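For reference, the commonly used type ids mentioned above can be collected as Go constants. The values come from the RTMP specification; the constant names are invented for this article, not livego's definitions.

```go
package rtmp

// Commonly used RTMP Message Type IDs referenced in this article.
const (
	typeSetChunkSize     = 1  // protocol control
	typeAbort            = 2  // protocol control
	typeAck              = 3  // protocol control
	typeUserControl      = 4  // user control events
	typeWindowAckSize    = 5  // protocol control
	typeSetPeerBandwidth = 6  // protocol control
	typeAudio            = 8  // audio data
	typeVideo            = 9  // video data
	typeCommandAMF3      = 17 // command message, AMF3 encoded
	typeMetadataAMF0     = 18 // data message (metadata)
	typeCommandAMF0      = 20 // command message, AMF0 encoded
)
```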
3.2 Core implementation process
Learning a network protocol can be a dry process. We will try to describe the core flows of the RTMP protocol vividly, including handshake, connect, createStream, publish (push), and play (pull), by combining the original RTMP specification text with packet captures of RTMP traffic. The environment for all captures in this section is: livego as the RTMP server (listening on port 1935), OBS as the push client, and VLC as the pull client.
When analyzing an application-layer protocol, we should first grasp the main flow. For an RTMP server, every push and every pull is, at the code level, a network connection, and each connection has its own processing flow. In the livego source code there is a handleConn method which, as the name implies, handles each connection. Following the main flow, it is divided into two parts: the first part is the handshake, and the second, core part parses the Chunk Header and Chunk Body according to the RTMP packet format and then processes them accordingly.
In the code block above there are two core methods: HandshakeServer, which handles the handshake logic, and ReadMsg, which reads the Chunk Header and Chunk Body information.
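Since it helps to see the shape of that loop, here is a simplified sketch of the per-connection flow. HandshakeServer and ReadMsg mirror the two methods named above, while the surrounding types are stand-ins for illustration rather than livego's exact API.

```go
package rtmp

// ChunkStream stands for one reassembled RTMP message.
type ChunkStream struct {
	TypeID uint8
	Data   []byte
}

type serverConn interface {
	HandshakeServer() error         // part 1: the RTMP handshake
	ReadMsg() (*ChunkStream, error) // part 2: read and reassemble one message
	Close() error
}

func handleConn(conn serverConn, handleMsg func(*ChunkStream) error) error {
	// Step 1: complete the handshake before anything else.
	if err := conn.HandshakeServer(); err != nil {
		conn.Close()
		return err
	}
	// Step 2: keep reading messages and dispatch each one
	// (connect / createStream / publish / play ...).
	for {
		cs, err := conn.ReadMsg()
		if err != nil {
			conn.Close()
			return err
		}
		if err := handleMsg(cs); err != nil {
			conn.Close()
			return err
		}
	}
}
```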
3.2.1 Part I: Handshake
The process of RTMP handshake is described in detail in Section 5.2.5 of the original protocol, as shown below:
At first glance, the process may seem a little complicated, so let's capture the packets with WireShark and look at the process as a whole.
The Info column of the WireShark capture interprets the meaning of each RTMP packet for us. As can be seen from the figure below, the handshake mainly involves three packets. Packet 16 is the client sending C0 and C1 to the server; packet 18 is the server sending S0, S1, and S2 to the client; packet 20 is the client sending C2 to the server. With that, the client and server complete the handshake.
From the WireShark capture, the handshake process is actually quite simple, somewhat similar to the TCP three-way handshake. In actual captures it differs a little from what is described in Section 5.2.5 of the RTMP specification, and the overall process turns out to be very concise.
Now you can look back at the more complex handshake flowchart above. In the figure, the client and server each go through four states: uninitialized, version number sent, ACK sent, and handshake completed.
Uninitialized: the client and server have not yet communicated.
Version number sent: C0 or S0 has been sent.
ACK sent: C2 or S2 has been sent.
Handshake completed: S2 or C2 has been received.
The RTMP specification does not strictly fix the order of C0/C1/C2 and S0/S1/S2, but it does impose the following rules:
The client must receive S1 from the server before sending C2.
The client must receive the S2 from the server before sending other data.
The server must receive the C0 from the client before sending S0 and S1.
The server must receive C1 from the client before sending S2.
The server must receive C2 from the client before sending other data.
From the WireShark capture analysis, we can see that the whole handshake process indeed follows the rules above. The question now is: what exactly are these C0, C1, C2, S0, S1, and S2 messages? In fact, their data formats are clearly defined in the RTMP specification.
C0 and S0 are 1 byte long; this byte specifies the RTMP version number. The value range is 0–255, and all we need to know is that 3 is the value we want. If you are interested in the meaning of the other values, consult the original specification.
C1 and S1 are 1536 bytes long and consist of timestamp + zero + random data; they are the middle packets of the handshake process.
C2 and S2 are 1536 bytes long and consist of timestamp + timestamp2 + a random data echo; they are basically echoes of C1 and S1. In practice, implementations usually set S2 = C1 and C2 = S1.
Let's use the livego source code to enhance our understanding of the handshake process.
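As a rough sketch of what that code does for the simple (non-encrypted) handshake: read C0 + C1, reply with S0 + S1 + S2 (with S2 echoing C1), then read C2. The code below is an illustration under those assumptions, not livego's actual implementation, which also supports the complex, digest-based handshake.

```go
package rtmp

import (
	"io"
	"net"
	"time"
)

// handshakeServer sketches the simple server-side handshake described above.
func handshakeServer(conn net.Conn) error {
	// Read C0 (1-byte version) and C1 (1536 bytes) in one go.
	c0c1 := make([]byte, 1+1536)
	if _, err := io.ReadFull(conn, c0c1); err != nil {
		return err
	}

	s0s1s2 := make([]byte, 1+1536*2)
	s0s1s2[0] = 3 // S0: RTMP version 3

	// S1 = 4-byte timestamp + 4 zero bytes + 1528 bytes of random data
	// (zeroed here for brevity; a real implementation fills it randomly).
	s1 := s0s1s2[1 : 1+1536]
	ts := uint32(time.Now().Unix())
	s1[0], s1[1], s1[2], s1[3] = byte(ts>>24), byte(ts>>16), byte(ts>>8), byte(ts)

	// S2 echoes C1, as most implementations do (S2 = C1).
	copy(s0s1s2[1+1536:], c0c1[1:])

	if _, err := conn.Write(s0s1s2); err != nil {
		return err
	}

	// C2 should echo S1; here we simply read and discard it.
	c2 := make([]byte, 1536)
	_, err := io.ReadFull(conn, c2)
	return err
}
```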
So far, the simplest handshake process is over. We can see that the whole handshake process is relatively clear, and the processing logic is relatively simple and easy to understand.
3.2.2 Part II: Information Exchange
3.2.2.1 Parsing the Chunk information of the RTMP protocol
After the handshake, the connection and other related steps need to be handled. To process this information well, we must first sharpen our tools.
We first have to parse the Chunk Header and Chunk Body according to the RTMP specification, turning the byte stream received from the network into information we can recognize, and then run the corresponding processing based on that information. This part is the core of the source-code analysis and involves many knowledge points; read it together with the theory above, which will make the core logic of ReadMsg easier to understand.
The logic of the code block above is very clear: it keeps reading from each conn connection, decodes the bytes, obtains Chunks one by one, merges the Chunks with the same ChunkStreamId back into the corresponding ChunkStream, and a completed ChunkStream is a Message.
This code maps closely to the chunk stream id concepts introduced in the theory section, so read them together. Keep in mind that a single conn connection carries multiple Messages, such as the connect Message and the createStream Message. Each Message is a Chunk Stream, that is, a group of Chunks with the same csid, so the livego authors use a map for storage: the key is the csid and the value is the ChunkStream. In this way, all the information sent to the RTMP server can be kept track of.
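A stripped-down sketch of that bookkeeping might look like this. The types and the feed method are invented for illustration, but the key idea, a map keyed by csid whose values accumulate in-flight messages, is the one described above.

```go
package rtmp

// chunkStream accumulates one message for a given csid.
type chunkStream struct {
	CSID   uint32
	TypeID uint8
	Data   []byte
	remain uint32 // body bytes still missing for the current message
}

type msgReader struct {
	streams map[uint32]*chunkStream // key: csid, value: in-flight message
}

// feed appends one decoded chunk's payload to the right stream and returns the
// stream when its message is complete, otherwise nil.
func (m *msgReader) feed(csid uint32, typeID uint8, msgLen uint32, payload []byte) *chunkStream {
	cs, ok := m.streams[csid]
	if !ok || cs.remain == 0 { // start a new message for this csid
		cs = &chunkStream{CSID: csid, TypeID: typeID, remain: msgLen}
		m.streams[csid] = cs
	}
	cs.Data = append(cs.Data, payload...)
	if uint32(len(payload)) >= cs.remain {
		cs.remain = 0
		return cs // complete Message, ready to be handled
	}
	cs.remain -= uint32(len(payload))
	return nil
}
```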
The specific logical implementation of the readChunk code is divided into the following parts:
1) Correction of the csid (see the theory above); this is effectively the processing of the Basic Header.
2) Parsing the Chunk Header according to the value of format, as introduced in the theory section above; specific comments are given below as well. Two technical points deserve attention: the first is the handling of timestamps, and the second is the line of code chunk.new(pool). The code comments explain both fairly clearly.
3) Reading the Chunk Body. As mentioned in the theory above, when fmt is 0 the Chunk Header carries a message length field, which determines the size of the Chunk Body; based on this field we can easily read the Chunk Body. The overall logic is sketched below.
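Here is a minimal sketch of step 3, assuming the message length comes from a fmt-0 header and the chunk size defaults to 128 bytes unless changed by Set Chunk Size. The type and method names are illustrative, not livego's code.

```go
package rtmp

import (
	"bufio"
	"io"
)

// ChunkStream accumulates the body of one message across its chunks.
type ChunkStream struct {
	Length uint32 // message length taken from the fmt-0 header
	Data   []byte // body assembled so far
	remain uint32 // bytes still missing
}

// readChunkBody reads at most one chunk's worth of body bytes per call; the
// message is complete when remain reaches zero.
func (cs *ChunkStream) readChunkBody(r *bufio.Reader, chunkSize uint32) (done bool, err error) {
	if cs.Data == nil {
		cs.Data = make([]byte, cs.Length)
		cs.remain = cs.Length
	}
	n := cs.remain
	if n > chunkSize {
		n = chunkSize // never read past one chunk's payload
	}
	start := cs.Length - cs.remain
	if _, err = io.ReadFull(r, cs.Data[start:start+n]); err != nil {
		return false, err
	}
	cs.remain -= n
	return cs.remain == 0, nil
}
```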
At this point we have successfully parsed the Chunk Header and read the Chunk Body. Note that we have only read the Chunk Body; we have not yet parsed it according to the AMF format. The handling of the Chunk Body will be covered in detail in the source code below. Now that we can parse the ChunkStreams sent over a connection, let's go back to the main flow.
As we said, after the handshake is completed and the ChunkStream information has been parsed, we handle the corresponding flow according to the typeId of the ChunkStream and the AMF data in the Chunk Body. The idea can be understood like this: client A sends an xxxCmd command, and the RTMP server parses the xxxCmd command based on the typeId and the AMF payload and sends back the corresponding response.
The handleCmdMsg in the code block above is the essence of how the RTMP server handles client commands. You can see that livego supports both AMF3 and AMF0 (their difference was introduced above, and the code comments below spell it out as well), parses the AMF-format Chunk Body, and stores the parsed result as a slice.
After the typeId and the AMF payload have been parsed, the next step is naturally to handle each client command.
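The dispatch itself boils down to a switch on the command name decoded from the AMF body. The sketch below is illustrative; the handler names stand in for livego's actual methods, and they are declared as variables here only so the snippet is self-contained.

```go
package rtmp

import "fmt"

// Handlers for each command; in livego these are methods on the server types.
var (
	handleConnect      func(args []interface{}) error
	handleCreateStream func(args []interface{}) error
	handlePublish      func(args []interface{}) error
	handlePlay         func(args []interface{}) error
	handleClose        func(args []interface{}) error
)

// handleCmdMsg dispatches a decoded command message (type id 20 or 17): the
// first AMF value is the command name.
func handleCmdMsg(vs []interface{}) error {
	if len(vs) == 0 {
		return fmt.Errorf("empty command message")
	}
	name, ok := vs[0].(string)
	if !ok {
		return fmt.Errorf("command name missing")
	}
	switch name {
	case "connect":
		return handleConnect(vs[1:])
	case "createStream":
		return handleCreateStream(vs[1:])
	case "publish":
		return handlePublish(vs[1:]) // push client starts publishing
	case "play":
		return handlePlay(vs[1:]) // pull client starts playing
	case "FCUnpublish", "deleteStream":
		return handleClose(vs[1:]) // push client stops publishing
	default:
		return nil // commands we do not care about are ignored
	}
}
```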
3.2.2.2 connection
Processing of the connection (connect) command: during the connection process, the client and server confirm the window size, the chunk size, and the bandwidth. The connection process is described in detail in the original RTMP specification, as shown in the following figure:
Similarly, we use WireShark packet capture analysis here:
From the capture we can see that the connection process is completed with only three packets:
Packet 22: the client tells the server that it wants to set the chunk size to 4096.
Packet 24: the client tells the server that it wants to connect to the application called "live".
Packet 26: the server responds to the client's connection request, settles the window size, bandwidth, and chunk size, and returns "_result" to indicate success. All of this is done in a single TCP packet.
So how do the client and server know what these packets mean? These are the rules laid down by the RTMP specification, which we can understand by reading the spec and, of course, with WireShark's help in parsing them quickly. Below is the detailed parsing of packet 22; let's focus on the RTMP protocol fields.
As you can see from the figure, the RTMP Header contains the Format, the Chunk Stream ID, the Timestamp, the Body size, the Message Type ID, and the Message Stream ID. The hexadecimal value of the Type ID is 0x01, which means Set Chunk Size; it belongs to the protocol control messages (Protocol Control Messages).
Section 5.4 of the RTMP specification stipulates that for protocol control messages the Chunk Stream ID must be set to 2 and the Message Stream ID must be set to 0, and the timestamp is simply ignored. The information parsed from the WireShark capture shows that packet 22 indeed conforms to the RTMP specification.
Now let's look at the detailed parsing of packet 24.
Packet 24 is also sent by the client. You can see that it sets the Message Stream ID to 0 and the Message Type ID to 0x14 (decimal 20), meaning an AMF0 command; AMF0 commands belong to the RTMP command messages (RTMP Command Messages). The RTMP specification does not mandate which Chunk Stream ID must be used during the connection process, because what really matters is the Message Type ID, and the server responds according to it. The AMF0 command sent during connection carries an Object that tells the server the name of the application to connect to and the playback address.
The following code shows how livego handles the client's connect request.
After receiving the client's request to connect to the application, the server needs to respond, which is the content of packet 26 captured by WireShark. The details are shown in the figure below; you can see that the server does several things in a single packet.
We can learn more about this process by combining the livego source code.
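Putting the three control messages and the _result reply together, the server side of the connect exchange can be sketched as follows. The interface, the method names, and the concrete window/bandwidth values are assumptions for illustration, not livego's exact API.

```go
package rtmp

// controlSender abstracts the few responses needed here; the method names are
// illustrative stand-ins for what livego implements on its connection type.
type controlSender interface {
	SendWindowAckSize(size uint32) error
	SendSetPeerBandwidth(size uint32, limitType uint8) error
	SendSetChunkSize(size uint32) error
	SendCommand(name string, transactionID float64, objects ...interface{}) error
}

// onConnect sketches the server's reply to "connect" seen in packet 26:
// three protocol control messages followed by the "_result" command.
func onConnect(c controlSender, transactionID float64, commandObj map[string]interface{}) error {
	app, _ := commandObj["app"].(string) // e.g. "live"
	_ = app                              // a real server would look the application up

	if err := c.SendWindowAckSize(2500000); err != nil { // Message Type ID 5
		return err
	}
	if err := c.SendSetPeerBandwidth(2500000, 2); err != nil { // Message Type ID 6, dynamic limit
		return err
	}
	if err := c.SendSetChunkSize(4096); err != nil { // Message Type ID 1
		return err
	}

	// _result carries the connection status object, encoded as AMF0 (type id 20).
	info := map[string]interface{}{
		"level":       "status",
		"code":        "NetConnection.Connect.Success",
		"description": "Connection succeeded.",
	}
	return c.SendCommand("_result", transactionID, info)
}
```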
3.2.2.3 createStream
After the connection is complete, the stream can be created. The process of creating a stream is relatively simple and takes only two packets, as follows:
Packet 32 is the createStream request initiated by the client, and packet 34 is the server's response. Below is the livego source code that handles the client's createStream request.
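The corresponding server-side logic can be sketched very briefly: answer the client's createStream with "_result" carrying the id of the newly created stream. The interface and the fixed stream id of 1 below are illustrative assumptions, not livego's exact code.

```go
package rtmp

// commandSender is an illustrative stand-in for the connection's AMF command writer.
type commandSender interface {
	SendCommand(name string, transactionID float64, objects ...interface{}) error
}

// onCreateStream sketches the two-packet exchange above.
func onCreateStream(c commandSender, transactionID float64) error {
	const streamID = 1
	// AMF0 body of the reply: "_result", transaction id, null, stream id.
	return c.SendCommand("_result", transactionID, nil, streamID)
}
```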
3.2.2.4 push
After the stream has been created, pushing or pulling can begin. Section 7.3.1 of the RTMP specification shows the publishing diagram, reproduced below. The connect and createStream processes have already been described in detail, so let's focus on the publishing content (Publishing Content) part.
Before pushing with livego, you need to obtain a channel key for the push. We can get the channelKey for the channel "movie" with the following command; the data field in the response body is the channelKey needed for pushing.
$ curl http://localhost:8090/control/get?room=movie
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 72
Date: Tue, 09 Feb 2021 09:19:34 GMT

{"status": 200, "data": "rfBd56ti2SMtYvSgD5xAV0YU99zampta7Z7S575KLkIZ9PYk"}
Use OBS to push the stream to the application named live and the channel named movie on the livego server; the push address is rtmp://localhost:1935/live/rfBd56ti2SMtYvSgD5xAV0YU99zampta7Z7S575KLkIZ9PYk. Again, let's look at the WireShark capture first.
At the beginning of the push, the client sends a publish request, which is the content of packet 36. The request carries the channel name, which in this packet is "rfBd56ti2SMtYvSgD5xAV0YU99zampta7Z7S575KLkIZ9PYk".
The server first checks whether the channel name exists and whether it is already in use, and rejects the client's publish request if it does not exist or is in use. Since we generated the channel name before pushing and the client is using it legitimately, the server responds with "NetStream.Publish.Start" in packet 38, telling the client it may start pushing. Before pushing audio/video data, the client needs to send the audio/video metadata to the server, which is what packet 40 does. Looking at the details of that packet in the figure below, there is a lot of metadata, including the video resolution, frame rate, audio sample rate, audio channels, and other key information.
After telling the server the audio/video metadata, the client can start sending the actual audio and video data, and the server keeps receiving it until the client issues the FCUnpublish and deleteStream commands. The main logic of the TransStart() method in stream.go is to receive audio/video data from the push client, cache the latest packets locally, and finally send the audio/video data to each pull client. The VirReader.Read() method in rtmp.go is mainly used to read a single piece of audio or video data from the push client; the relevant code and comments are shown below.
Part of the source code for parsing the media header information is attached below.
Parsing the audio header:
Parsing the video header:
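What those two snippets do can be sketched as follows: the first byte of each audio or video message follows the FLV tag layout, so the codec and frame information can be read before the payload is forwarded. The field meanings come from the FLV specification; the type and function names below are made up for this article.

```go
package rtmp

// audioHeader describes the first byte of an audio message (FLV audio tag layout).
type audioHeader struct {
	SoundFormat byte // high 4 bits: 10 = AAC, 2 = MP3, ...
	SampleRate  byte // bits 3-2: 0 = 5.5 kHz, 1 = 11 kHz, 2 = 22 kHz, 3 = 44 kHz
	SampleSize  byte // bit 1: 0 = 8-bit, 1 = 16-bit
	Channels    byte // bit 0: 0 = mono, 1 = stereo
}

// videoHeader describes the first byte of a video message (FLV video tag layout).
type videoHeader struct {
	FrameType byte // high 4 bits: 1 = key frame, 2 = inter frame
	CodecID   byte // low 4 bits: 7 = AVC (H.264)
}

func parseAudioHeader(b byte) audioHeader {
	return audioHeader{
		SoundFormat: b >> 4,
		SampleRate:  (b >> 2) & 0x03,
		SampleSize:  (b >> 1) & 0x01,
		Channels:    b & 0x01,
	}
}

func parseVideoHeader(b byte) videoHeader {
	return videoHeader{FrameType: b >> 4, CodecID: b & 0x0f}
}
```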
3.2.2.5 pull
As the push client keeps publishing, pull clients can continuously pull audio and video data through the server. The pull process is described in detail in Section 7.2.2.1 of the RTMP specification. The handshake, connect, and createStream processes have already been described, so let's focus only on the play command.
Again, let's analyze with a WireShark capture. In packet 640 the client tells the server that it wants to play the channel named "movie".
Why is it called "movie" instead of the "rfBd56ti2SMtYvSgD5xAV0YU99zampta7Z7S575KLkIZ9PYk" used for pushing? In fact both point to the same channel: one name is used for pushing and the other for pulling. We can confirm this from the livego source code.
After receiving the play request from the pull client, the server responds with "NetStream.Play.Reset", "NetStream.Play.Start", "NetStream.Play.PublishNotify", and the audio/video metadata. Once that is done, it can keep sending audio and video data to the pull client. We can deepen our understanding of this process through the livego source code.
The pushed data is read from a channel (chan) and then sent to the pull client.
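Here is a minimal sketch of that fan-out, assuming one buffered channel per player; livego's real cache and ring-buffer types are richer, and all names below are illustrative.

```go
package rtmp

// Packet stands for one audio or video message read from the push client.
type Packet struct {
	IsVideo   bool
	Timestamp uint32
	Data      []byte
}

// player represents one pull client with its own buffered channel.
type player struct {
	packets chan *Packet
}

// send enqueues a packet without ever blocking the publisher goroutine.
func (p *player) send(pkt *Packet) {
	select {
	case p.packets <- pkt:
	default:
		// the player is too slow: drop the packet rather than block the publisher
	}
}

// broadcast is called for every packet read from the push client.
func broadcast(players []*player, pkt *Packet) {
	for _, p := range players {
		p.send(pkt)
	}
}
```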
So far we have covered the main flow of RTMP. This does not touch on other delivery protocols such as FLV and HLS, or the source code for format conversion. In other words, the RTMP server forwards the audio/video packets it receives from the push client to the pull clients as-is, without extra processing. That said, all major cloud vendors now support delivery protocols such as HTTP-FLV and HLS, as well as recording, playback, and VOD of audio and video, and livego supports these too.
Due to space limitations we will not go further here; later there may be an opportunity to study and share livego's handling of that logic separately.
IV. Outlook
At present, live streaming based on the RTMP protocol is the benchmark for domestic live streaming, and it is the live-streaming protocol that all major cloud vendors are compatible with. Its excellent features, such as multiplexing and packetization, are an important reason why the major vendors choose it. And because it is an application-layer protocol, large cloud vendors such as Tencent, Alibaba, and Agora also modify the protocol implementation in their own source code to add functions such as mixing multiple audio/video streams and single-stream recording.
However, RTMP also has its own shortcomings, and high latency is one of the biggest. In actual production, even in a relatively healthy network environment, RTMP latency reaches about 3–8 s, which is quite different from the theoretical value of 1–3 s quoted by the major cloud vendors. What problems does this latency cause? Imagine the following scenarios:
In online education, a student asks a question, but the teacher has already moved on to the next topic before seeing it.
In e-commerce live streams, a viewer asks about a product and gets "ignored" by the host.
After sending a tip, the viewer does not hear the host's spoken thanks for a long time.
When you learn that a goal has been scored from your neighbors' cheers, do you still want to watch the live stream?
Especially now that live streaming has formed an industry chain, many hosts treat it as a profession, and many of them broadcast from the same corporate network. When the outbound bandwidth of the company network is limited, the latency of RTMP and FLV becomes even worse. High-latency live streaming not only hurts the real-time interaction between viewers and hosts, but also blocks certain live-streaming scenarios from landing, such as live commerce and live education.
The following is the typical latency picture when using the RTMP protocol. Depending on the actual network conditions and push settings such as keyframe interval and bitrate, the end-to-end latency is generally around 8 seconds, and it mainly comes from two sources:
CDN link latency, which has two parts. One is network transmission latency: within the CDN there are four segments of network transmission; assuming each segment introduces 20 ms of delay, these four segments add about 80 ms. In addition, using RTMP frames as the transmission unit means that each node must receive a complete frame before it can start forwarding it downstream; to improve concurrency, CDNs also optimize their packet-sending strategy, which adds some latency. Under network jitter the latency is even less controllable: with a reliable transport protocol, once jitter occurs, subsequent transmission is blocked waiting for retransmission of the missing packets.
The player buffer, which is the main source of latency. Public network conditions vary widely, and any network jitter during pushing, CDN transmission, or playback will affect the player. To absorb jitter from the upstream links, players generally keep a media buffer of about 6 s.
At this point, the study of "what is the RTMP protocol" comes to an end. I hope it has resolved your doubts; combining theory with practice is the best way to learn, so go and try it! If you want to keep learning more related knowledge, stay tuned for more practical articles.