This article looks at how the server side of the TARS C++ framework works: how a server is initialized, and how it accepts client connections, receives RPC requests, processes them, and sends back responses. Many people have questions about how the TARS server is put together, so this walkthrough collects the relevant material into a readable path through the source code. I hope it helps resolve those doubts; follow along and study it.
TARS is a microservice development framework that Tencent has used for ten years; it currently supports C++, Java, PHP, Node.js and Go. The open-source project provides a complete microservice PaaS solution covering development, operations and testing, helping a product or service to be developed, deployed, tested and launched quickly. The framework is used in Tencent's core businesses, with hundreds of thousands of service nodes deployed and running on it.
The communication model of TARS consists of a client and a server, which talk to each other mainly through RPC. This series is divided into two parts that walk through the source code of the RPC call path; this part covers the server side.
When you use TARS to build an RPC server, TARS generates an XXXServer class that inherits from the Application class, declares the global variable XXXServer g_app, and calls its functions:
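The generated listing is not reproduced here; the following is a minimal, self-contained sketch of its shape (class and method names are illustrative stand-ins, not the exact generated code), showing the derived server class, the global g_app, and the main() that drives initialization and shutdown.

```cpp
#include <exception>
#include <iostream>

// Stand-in for tars::Application, just enough to show the call sequence.
class Application {
public:
    virtual ~Application() = default;
    void main(int /*argc*/, char* /*argv*/[]) { initialize(); }  // parse config, build server
    void waitForShutdown() { /* start threads, block until asked to stop */ }
protected:
    virtual void initialize() = 0;                               // servant registration goes here
};

// "XXXServer" in the article; the generated class derives from Application.
class StringServer : public Application {
protected:
    void initialize() override {
        // In real generated code: addServant<StringServantImp>("MyDemo.StringServer.StringServantObj");
    }
};

StringServer g_app;                                              // the global "XXXServer g_app"

int main(int argc, char* argv[]) {
    try {
        g_app.main(argc, argv);      // initialization: config, network and business threads
        g_app.waitForShutdown();     // run until shutdown
    } catch (const std::exception& e) {
        std::cerr << "std::exception: " << e.what() << std::endl;
    }
    return 0;
}
```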
Calling these functions starts the TARS RPC service. Before we start analyzing the server-side code of TARS, let's introduce a few important classes to build a general picture.

Application: as mentioned earlier, a server is an Application. Application helps the user read the configuration file, initializes proxies according to the configuration (needed only if this server calls other services) and servants, and creates and starts the network threads and business threads.

TC_EpollServer: the real server. If Application is compared to a fan, TC_EpollServer is the motor. TC_EpollServer is in charge of the two major modules, the network module and the business module, which correspond to the classes introduced below.

NetThread: represents the network module. It includes a TC_Epoller for IO multiplexing, TC_Socket for establishing socket connections, and a ConnectionList that records the many socket connections to clients. Any network-related sending and receiving of data involves NetThread. The number of NetThreads is configured with the netthread item under /tars/application/server in the configuration file.

HandleGroup and Handle: represent the business module. A Handle is a thread that executes the RPC servant code, and a HandleGroup composed of many Handles is the set of business threads of one RPC service. The business threads call the user-defined servant code and put the processing results into the send cache for the network module to send. A later section explains in detail how a business thread calls the user-defined code; a simple form of C++ reflection is used there that is rarely covered in other material. The number of Handles (business threads) in a HandleGroup is configured with the threads item under /tars/application/server/xxxAdapter in the configuration file.

BindAdapter: represents an RPC service entity. The xxxAdapter items under /tars/application/server in the configuration file are the configuration of BindAdapters; each BindAdapter represents one service entity. Its configuration shows its role: it stands for the listening socket that an RPC service exposes, and it also declares the maximum number of connections, the size of the receive queue, the number of business threads, the RPC service name, the protocol used, and so on. A BindAdapter can itself be regarded as an instance of a service that establishes a real listening socket and serves external requests, which ties it to both the network module NetThread and the business module HandleGroup. For example, the first of the NetThreads is responsible for listening on the BindAdapter's listen socket, and a connection accepted on that listen socket is handed to one of the NetThreads and placed into that thread's ConnectionList. A BindAdapter is also usually associated with a HandleGroup whose business threads execute the servant corresponding to that BindAdapter. So BindAdapter is related to the network module as well as to the business module.

Now that these classes have been introduced, look at the relationship between them through the class diagram:
TC_EpollServer manages the network module on the left side of the class diagram and the business module on the right. The former is responsible for establishing and managing the network connections between clients and the server, while the latter executes the server's business code. The two are joined through BindAdapter and together provide RPC services to the outside.
Like the client, the server needs to initialize to build the structure described above. Following the introduction above, initialization can be divided into two parts: initialization of the network module and initialization of the business module. All the initialization code lives in Application::main() and Application::waitForQuit(). Initialization also includes ignoring the pipe signal (SIGPIPE) and reading the configuration file, which we will skip; we will mainly look at how the network part is built through epoll and the listen sockets, and how the business thread groups are set up to build the business part.

Initialization of TC_EpollServer
Before the network module and the business module are initialized, TC_EpollServer itself needs to be initialized. The main code is as follows:
The static member variables of ServerConfig are populated in initializeServer() and used later when needed. Note the line _epollServer = new TC_EpollServer(iNetThreadNum): the server TC_EpollServer is created, and the network threads (NetThread) are created along with it:
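The actual constructor code is omitted here; as a minimal sketch of the relationship it implies (names modelled on the article, not taken from the TARS source), the server owns a set of network threads created up front from the configured thread count:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

class NetThread { /* epoll loop, ConnectionList, ... */ };

class EpollServerSketch {
public:
    explicit EpollServerSketch(std::size_t netThreadNum) {
        for (std::size_t i = 0; i < netThreadNum; ++i)
            _netThreads.push_back(std::make_unique<NetThread>());  // one epoll loop per thread
    }
    std::size_t netThreadCount() const { return _netThreads.size(); }
private:
    std::vector<std::unique_ptr<NetThread>> _netThreads;            // the network module
};

int main() {
    EpollServerSketch server(2);   // iNetThreadNum comes from the config file in real TARS
    return server.netThreadCount() == 2 ? 0 : 1;
}
```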
At this point an AdminAdapter has also been set up, but since it differs from a normal RPC-service BindAdapter, it will not be covered here.
Now that TC_EpollServer has been built, how are its two arms, the network module and the business module, set up?
Initialization of the network module
Before explaining the network module, take a closer look at its class diagram:
Let's take a look at which code in Application is related to the initialization of the network module:
Initializing the network part involves setting up the listening port of each RPC service (socket, bind, listen), accepting client connections (accept), creating the epoll instance, and so on. So when and where are these functions called? The approximate process is shown in the following figure:
1. Create the listen socket of the service entity. First, Application::main() calls:
Application::bindAdapter() creates the service-entity BindAdapters, determining how many there are and how each is configured by reading the xxxAdapter items under /tars/application/server in the configuration file, and then calls:
to set up the listen socket of each service entity. As you can see, in TC_EpollServer::bind():
The first network thread of the group created during the TC_EpollServer initialization above is responsible for creating and listening on the service entity's listen socket, which avoids the thundering-herd problem of multiple threads listening on the same fd. The next step is to call NetThread::bind(BindAdapterPtr& lsPtr), which does some preparatory work; the socket is actually created in NetThread::bind(const TC_Endpoint& ep, TC_Socket& s), executed from within NetThread::bind(BindAdapterPtr& lsPtr):
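The TARS listing is not shown here; the essential work is the usual BSD-socket sequence. The following is a minimal standalone sketch, assuming a TCP endpoint (the helper name and default values are illustrative, not TARS API):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

// Create, bind and listen on a TCP socket; returns the listen fd or -1 on error.
int create_listen_socket(const char* ip, unsigned short port, int backlog = 1024) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return -1; }

    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));   // allow fast restart

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(fd, backlog) < 0) {
        perror("bind/listen");
        close(fd);
        return -1;
    }
    return fd;   // in TARS, this fd is what the first NetThread watches with epoll
}

int main() {
    int fd = create_listen_socket("0.0.0.0", 10000);   // endpoint values are illustrative
    if (fd >= 0) close(fd);
    return 0;
}
```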
At this point, the listen socket of the service entity BindAdapter has been created. After control returns to NetThread::bind(BindAdapterPtr& lsPtr), you can also see that NetThread records which BindAdapter each listening fd belongs to:
The following figure summarizes the process of creating a listen socket for a service entity.
2. Create the epoll. Back in Application::main(), the code executes:
so that TC_EpollServer creates an epoll in each network thread it manages:
The code then reaches NetThread::createEpoll(uint32_t iIndex), which can be regarded as the initialization function of a NetThread. Inside it, the network thread's memory pool is set up, the epoll is created, the listen socket created above is added to the epoll (only the first network thread holds listen sockets), and the connection-management list ConnectionList _list is initialized. See the following figure for a summary of this process:
3. Start the network threads. Because NetThread is a thread, its start() function must be executed to start it. This is not done in Application::main(), but in Application::waitForQuit(), which is called from Application::waitForShutdown(). Follow the flow chart below through the code and it becomes clear:
Initialization of the business module
As with the network module, take a careful look at the business module's class diagram before explaining it:
Two questions need to be answered about the business module's initialization: how does the business module establish contact with the user-implemented XXXServantImp, so that when a request arrives the Handle can call the user-defined RPC method? And when and where are the business threads started, and how do they wait for requests to arrive?
Take a look at which code in Application is related to the initialization of the business module:
The first question is answered in bindAdapter(adapters) and initialize(); the rest of the code creates and starts the Handle business thread groups.

1. How is a BindAdapter associated with the user-defined methods? Take a look at the following code flow chart:
How does a business thread get to call user-defined code? This is where ServantHelperManager comes in. As a quick preview: with ServantHelperManager as a bridge, a business thread uses the ID of a BindAdapter to look up the servant ID, and uses the servant ID to look up the creator of the user-defined XXXServantImp class. With the creator, the business thread can instantiate XXXServantImp and call the methods inside. Now let's analyze this step by step. In Application::bindAdapter(), which is called from Application::main(), you can see the following code:
For example, adapterName[i] might be MyDemo.StringServer.StringServantAdapter and servant might be MyDemo.StringServer.StringServantObj, both read from the configuration file; the former is the BindAdapter ID and the latter is the servant ID. ServantHelperManager::setAdapterServant() simply executes:
These two member variables are simply:
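The declarations are not reproduced here; a hedged sketch of the two mappings as the article describes them (member names are assumptions, not copied from the source) looks like this:

```cpp
#include <map>
#include <string>

struct AdapterServantMaps {
    // "MyDemo.StringServer.StringServantAdapter" -> "MyDemo.StringServer.StringServantObj"
    std::map<std::string, std::string> _adapter_servant;
    // Reverse lookup: servant name -> adapter name.
    std::map<std::string, std::string> _servant_adapter;

    void setAdapterServant(const std::string& adapter, const std::string& servant) {
        _adapter_servant[adapter] = servant;
        _servant_adapter[servant] = adapter;
    }
};

int main() {
    AdapterServantMaps m;
    m.setAdapterServant("MyDemo.StringServer.StringServantAdapter",
                        "MyDemo.StringServer.StringServantObj");
    return m._adapter_servant.size() == 1 ? 0 : 1;
}
```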
All this does is record a mapping: the BindAdapter ID can be used to look up the servant ID, and with the servant ID the user-implemented XXXServantImp class can be obtained through the simple C++ reflection mechanism, which in turn gives access to the user-implemented methods.
How is the reflection from servant ID to class implemented? Again with the help of ServantHelperManager. In Application::main(), after Application::bindAdapter() comes initialize(), a pure virtual function whose implementation actually runs in the derived class XXXServer and looks roughly like:
The code eventually executes ServantHelperManager::addServant():
Here the parameter const string& id is the servant ID, for example the MyDemo.StringServer.StringServantObj above, and T is the user-implemented XXXServantImp class. The line _servant_creator[id] = new ServantCreation<T>() is the key to the function: _servant_creator is a map that can be indexed by servant ID to get a ServantHelperCreationPtr. And what is a ServantHelperCreationPtr? It is a class creator that generates XXXServantImp instances for us. This is the simple C++ reflection:
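The TARS source is omitted here; the following self-contained sketch shows the same technique as the article describes it (type and member names are modelled on the article, not copied from TARS): a templated creator object is stored per servant ID so that a servant instance can be built later from the ID alone.

```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct Servant { virtual ~Servant() = default; };

// Base class for all creators: erases the concrete servant type behind a common interface.
struct ServantCreationBase {
    virtual ~ServantCreationBase() = default;
    virtual std::shared_ptr<Servant> create() = 0;
};

// One creator per concrete XXXServantImp type: "new T()" is deferred until needed.
template <typename T>
struct ServantCreation : ServantCreationBase {
    std::shared_ptr<Servant> create() override { return std::make_shared<T>(); }
};

class ServantRegistry {
public:
    template <typename T>
    void addServant(const std::string& id) {            // called from XXXServer::initialize()
        _servant_creator[id] = std::make_shared<ServantCreation<T>>();
    }
    std::shared_ptr<Servant> create(const std::string& id) const {
        auto it = _servant_creator.find(id);
        return it == _servant_creator.end() ? nullptr : it->second->create();
    }
private:
    std::map<std::string, std::shared_ptr<ServantCreationBase>> _servant_creator;
};

struct StringServantImp : Servant {};                    // stand-in for the user's servant

int main() {
    ServantRegistry m;
    m.addServant<StringServantImp>("MyDemo.StringServer.StringServantObj");
    std::cout << (m.create("MyDemo.StringServer.StringServantObj") ? "created\n" : "missing\n");
}
```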
That is the simple reflection technique for generating the corresponding XXXServantImp class from a servant ID. A business thread only needs the ID of the BindAdapter whose work it executes: through ServantHelperManager it can obtain the servant ID, and with the servant ID it can obtain the creator of the XXXServantImp class, instantiate it, and execute the user-defined RPC methods inside. Figure (2-8) gives a general picture of the whole process.
2. Startup of the Handle business threads
What remains is to create the HandleGroup, associate it with its BindAdapter, bind it to TC_EpollServer, and then create and start the Handle business threads under the HandleGroup. Starting a Handle involves obtaining the servant class creator described above in "How is a BindAdapter associated with the user-defined methods". Let's look at the general code flow chart:
There are two parts here. The first part executes the following code in Application::main():
Iterate through each BindAdapter defined in the configuration file (for example MyDemo.StringServer.StringServantAdapter) and set up its business thread group HandleGroup, so that all threads in that group can execute the RPC methods corresponding to that BindAdapter. Tracing into the code:
Note that ServantHandle is a derived class of Handle, the business-processing thread class. Next comes:
The code that actually creates the business thread group HandleGroup and the threads inside it, and associates the thread group with the BindAdapter and with TC_EpollServer, is in TC_EpollServer::setHandleGroup():
Here you can see the creation of the business thread group, HandleGroupPtr hg = new HandleGroup(); the creation of the business threads, HandlePtr handle = new T() (where T is ServantHandle); and the establishment of the relationships, such as associating the BindAdapter with the HandleGroup: it->second->adapters[adapter->getName()] = adapter and adapter->_handleGroup = it->second. After this code runs, you get the following class diagram:
Let's briefly review the flow of the above code with a function flow chart; the main work happens in TC_EpollServer::setHandleGroup():
As the functions return layer by layer, the code comes back to Application::main(), which then executes:
In TC_EpollServer::startHandle(), all the business thread groups (HandleGroup) managed by TC_EpollServer are iterated over, every Handle in each group is visited, and its start() method is executed to start the thread:
Because Handle inherits from TC_Thread, executing Handle::start() ends up running the virtual function Handle::run(). Handle::run() mainly executes two functions: ServantHandle::initialize() and Handle::handleImp():
The main job of ServantHandle::initialize() is to obtain the user-implemented RPC methods. The principle is the same as described above in "How is a BindAdapter associated with the user-defined methods": using the ID of the associated BindAdapter and ServantHelperManager, it finds the creator of the user-implemented XXXServantImp class and generates an instance of it. That instance and the servant name are stored as a pair in the map ServantHandle::_servants. When a business thread Handle needs to execute a user-defined method, it looks it up in ServantHandle::_servants:
The main job of Handle::handleImp() is to make the business thread block on a condition variable. Here you can see the call _handleGroup->monitor.timedWait(_iWaitTime), which blocks waiting on the condition variable:
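The TARS monitor class is not shown here; the following standalone sketch reproduces the same wait/notify pattern with standard C++ primitives, assuming a plain queue standing in for the BindAdapter receive queue:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

struct HandleGroupSketch {
    std::mutex mtx;
    std::condition_variable monitor;             // plays the role of _handleGroup->monitor
    std::queue<std::string> recvQueue;           // stands in for the BindAdapter receive queue

    // Business-thread side: roughly what handleImp() does on each loop iteration.
    bool waitForRequest(std::string& req, int waitMs) {
        std::unique_lock<std::mutex> lk(mtx);
        monitor.wait_for(lk, std::chrono::milliseconds(waitMs),
                         [this] { return !recvQueue.empty(); });   // timedWait(_iWaitTime)
        if (recvQueue.empty()) return false;                       // timed out, loop again
        req = recvQueue.front();
        recvQueue.pop();
        return true;
    }

    // Network-thread side: roughly what insertRecvQueue() plus notify() achieve.
    void pushRequest(const std::string& req) {
        { std::lock_guard<std::mutex> lk(mtx); recvQueue.push(req); }
        monitor.notify_all();                    // wake the blocked Handle threads
    }
};

int main() {
    HandleGroupSketch group;
    group.pushRequest("hello");
    std::string req;
    return group.waitForRequest(req, 100) && req == "hello" ? 0 : 1;
}
```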
The Handle threads use the condition variable to keep all business threads blocked until they are woken up. Since this chapter only covers initialization, the code walkthrough stops here; later we explain in detail how, after a business thread Handle is woken up, it finds and executes the servant through the map ServantHandle::_servants. Now review the above code flow with the function flowchart:
After initialization, the server enters its working state. Its worker threads come in two kinds, as described earlier: the network threads are responsible for accepting client connections and sending and receiving data, while the business threads are only concerned with executing the user-defined RPC methods. Both kinds of thread were already started with start() during initialization.
Most servers follow the accept() -> read() -> write() -> close() pattern. The approximate workflow is shown in the following flow chart:
The TARS server is no exception. Its dispatch logic is built on the epoll IO-multiplexing model: each network thread NetThread has a TC_Epoller to collect, listen for and distribute events. As mentioned earlier, only the first network thread listens for new connections; after accepting a new connection it constructs a Connection instance and selects the network thread that will handle that connection. Once a request has been read in, it is stored temporarily in the receive queue and the business threads are notified to process it. This is where the business threads finally come on stage; after processing the request, the result is placed in the send queue. When there is data in the send queue, the network thread is naturally notified to send it, and the notified network thread sends the response back to the client. That is roughly the workflow of the TARS server; as the figure above shows, it does not differ much from an ordinary server. The following four parts introduce the server's work one by one: accepting client connections, receiving RPC requests, processing RPC requests, and sending RPC responses.

Accepting client connections
The discussion of how the server accepts connections obviously starts from NetThread::run() of the network thread (specifically, the first network thread of the group). Once the TC_Epoller has been created and the listening fd has been put into it, what executes is:
So when epoll_wait() returns, the epoll_data union in the epoll_event holds ET_LISTEN in its high 32 bits and the listen socket's fd in its low 32 bits; extracting the high 32 bits yields ET_LISTEN, so the case ET_LISTEN branch of the switch below is executed.
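As a standalone sketch of this tagging trick (the numeric tag values are assumptions, except that the article later states ET_NET is 0): the 64-bit epoll_data.u64 carries an event type in its high 32 bits and an fd or connection uid in its low 32 bits, so the epoll loop can switch on the high half after epoll_wait() returns.

```cpp
#include <cstdint>
#include <cstdio>

// Assumed tag values; the article only states that ET_NET is 0.
enum : uint32_t { ET_NET = 0, ET_LISTEN = 1, ET_NOTIFY = 2 };

inline uint64_t pack(uint32_t type, uint32_t fdOrUid) {
    return (static_cast<uint64_t>(type) << 32) | fdOrUid;
}

int main() {
    uint64_t data = pack(ET_LISTEN, 7);   // what _epoller.add() stored for a listen fd

    uint32_t high = static_cast<uint32_t>(data >> 32);   // event type
    uint32_t low  = static_cast<uint32_t>(data);         // fd (listen) or connection uid

    switch (high) {
        case ET_LISTEN: std::printf("accept on listen fd %u\n", low); break;
        case ET_NET:    std::printf("read/write on connection uid %u\n", low); break;
        case ET_NOTIFY: std::printf("business thread asked us to send\n"); break;
    }
    return 0;
}
```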
The whole flow of ret = accept(ev.data.u32) is shown in the following figure (ev.data.u32 is the fd of the listen socket belonging to the activated BindAdapter):
Before explaining, review the class diagrams related to network threads, and get a general impression of accept through the illustration:
All right, follow figure (2-14). Let's start from the call to NetThread::accept(int fd) in NetThread::run().

1. accept obtains the client socket. Entering NetThread::accept(int fd), you can see this code executed:
Through TC_Socket::accept(), the system function accept() is called to take over the socket connection the client established through the three-way handshake; the client's IP and port are then logged and checked, and the corresponding BindAdapter is checked for overload (the connection is closed if it is overloaded). The client socket is then configured:
At this point, the first step in the corresponding figure (2-16), accepting a client connection (the process is shown in the following figure), has been completed.
2. Create a Connection for the client socket. The next step is to create a Connection for the new client socket. In NetThread::accept(int fd), the code that creates the Connection is as follows:
The constructor parameters are, in order: the BindAdapter pointer for the new client, the fd of the BindAdapter's listen socket, the timeout, the fd of the client socket, and the client's IP and port. In the Connection constructor, its TC_Socket is also bound to that fd:
Once the TC_Socket is bound, the client socket can be operated on through the Connection instance. At this point the second step of figure (2-16), creating a Connection for the client socket, is complete (the process is shown in the following figure).
3. Select a network thread for the Connection. Finally, a network thread is selected for this Connection and the Connection is added to that thread's ConnectionList. In NetThread::accept(int fd), this is done by executing:
The code for TC_EpollServer::addConnection() is as follows:
As you can see, a network thread is first selected for Connection* cPtr; in the flowchart the selected thread is called Chosen_NetThread. The selection is made by TC_EpollServer::getNetThreadOfFd(int fd), based on the fd of the client socket. The specific code is as follows:
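The TARS code itself is omitted; a plausible standalone sketch of such an fd-based selection (the modulo rule here is an assumption for illustration, not confirmed from the source) is:

```cpp
#include <cstddef>
#include <vector>

struct NetThread { int index; };

struct NetThreadPicker {
    std::vector<NetThread> threads;

    // Assumed rule: map a client fd onto a network thread by simple modulo.
    NetThread& getNetThreadOfFd(int fd) {
        return threads[static_cast<std::size_t>(fd) % threads.size()];
    }
};

int main() {
    NetThreadPicker picker;
    picker.threads = { {0}, {1}, {2} };
    return picker.getNetThreadOfFd(7).index == 1 ? 0 : 1;   // 7 % 3 == 1
}
```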
Then the selected thread's NetThread::addTcpConnection() method is called (or NetThread::addUdpConnection(); only TCP is covered here) to add the Connection to that network thread's ConnectionList, and finally _epoller.add(cPtr->getfd(), cPtr->getId(), EPOLLIN | EPOLLOUT) is executed to add the client socket's fd to the network thread's TC_Epoller, making that network thread responsible for sending and receiving this client's data. At this point the third step in the corresponding figure is complete (the specific process is shown in the following figure).
Receiving RPC requests
The discussion of how the server receives an RPC request also starts from NetThread::run(). Above, we entered the case ET_LISTEN branch of the switch to accept the client connection; now we enter the case ET_NET branch. Why ET_NET? Because, as mentioned above, the client socket's fd was added to the TC_Epoller for read/write monitoring with _epoller.add(cPtr->getfd(), cPtr->getId(), EPOLLIN | EPOLLOUT). The second argument passed there is the 32-bit integer cPtr->getId(), while the value stored must be a 64-bit integer, so the stored value has 0 in its high 32 bits and cPtr->getId() in its low 32 bits. That value is handed back as the 64-bit epoll_data_t data inside the epoll_event of the active event when the registered event causes epoll_wait() to return. With that in mind, look at the following NetThread::run() code:
The variable h in the code is the high 32 bits of the 64-bit union epoll_data_t data. From the analysis above, if a client socket causes epoll_wait() to return because data has arrived, the high 32 bits of epoll_data_t data are 0 and the low 32 bits are cPtr->getId(), so h is 0. Since ET_NET is 0, incoming data on a client socket executes the case ET_NET branch. Let's look at the flow chart of the functions executed in the case ET_NET branch.
1. Get the activated Connection. Receiving the RPC request leads to NetThread::processNet(). The server needs to know which client socket was activated, so NetThread::processNet() executes:
As mentioned above, the high 32 bits of epoll_data_t data are 0 and the low 32 bits are cPtr->getId(); once this uid has been obtained, NetThread::getConnectionPtr() can return, from the ConnectionList, the Connection whose RPC request needs to be read. The obtained Connection is then briefly checked, along with whether epoll_event::events is EPOLLERR or EPOLLHUP (the specific process is shown in the figure below).
2. Receive the client request and put it into the thread-safe queue. Next, the request data needs to be received from the client; receiving data means epoll_event::events is EPOLLIN. In the following code, NetThread::recvBuffer() reads the RPC request data and Connection::insertRecvQueue() wakes the business threads to process it.
First, look at NetThread::recvBuffer(). The server creates a thread-safe queue, recv_queue::queue_type vRecvData, to hold the received data, and then calls NetThread::recvBuffer(cPtr, vRecvData) with the just-obtained Connection cPtr and vRecvData as parameters. NetThread::recvBuffer() in turn calls Connection::recv():
Connection::recv() receives data differently depending on the transport-layer protocol (for UDP transport, lfd == -1); for TCP, for example, it executes:
Different actions are taken depending on what happens during reception, for example receiving a FIN segment or errno == EAGAIN. If a real request packet is received, the data is placed in the string Connection::_recvbuffer and Connection::parseProtocol() is called. In Connection::parseProtocol(), the protocol-parsing callback is invoked to validate the received data; once validation passes, a queue element tagRecvData* recv is constructed and pushed into the thread-safe queue:
At this point, the RPC request data has been fully fetched and placed in the thread-safe queue (as shown in the following figure).
3. The thread-safe queue is not empty: wake the business threads to process it
By the time the code gets here, there is finally RPC request data in the thread-safe queue, so the business thread Handle can be woken up to process it. The code returns to NetThread::processNet(); as long as the thread-safe queue is not empty, it executes Connection::insertRecvQueue():
In Connection::insertRecvQueue(), the BindAdapter's load is checked first; there are three cases: not overloaded, half overloaded, and fully overloaded. If it is fully overloaded, all the RPC request data in the thread-safe queue is discarded; otherwise BindAdapter::insertRecvQueue() is executed. BindAdapter::insertRecvQueue() does two main things. The first is to put the received RPC request packet into the BindAdapter's receive queue, recv_queue _rbuffer:
The second is to wake up the HandleGroup thread group waiting for the condition variable:
Now, having received the RPC request data, the server's network thread finally wakes up the business threads (the specific process is shown in the following figure). Next, it is the business module's turn: let's see how the RPC request is handled.
Processing the RPC request
The previous part, receiving the request data, ended with the network thread waking up the business thread group HandleGroup (that is, _handleGroup->monitor.notify()). This echoes the earlier section on starting the Handle business threads, where Handle::handleImp() calls _handleGroup->monitor.timedWait(_iWaitTime): through the condition variable, the business threads of the HandleGroup block together and wait for the network thread to wake them. Now that a notification has finally been posted on the condition variable, what happens to the request? It helps to recall what is stored inside ServantHandle::_servants. Processing the RPC request is divided into three steps: construct the request context, call the user-implemented method to process the request, and push the response packet into the thread-safe queue and notify the network thread. The function flow is shown in the following figure; now let's analyze it further.
1. Get the request data and construct the request context. When a business thread is woken from the condition variable, it fetches request data from the BindAdapter it is responsible for: adapter->waitForRecvQueue(recv, 0). In BindAdapter::waitForRecvQueue(), the data is taken from the thread-safe queue recv_queue BindAdapter::_rbuffer:
Remember where the data was pushed into this thread-safe queue? Yes, in the third point of "Receiving RPC requests" above, "The thread-safe queue is not empty: wake the business threads to process it". Next, ServantHandle::handle() is called to process the received RPC request data. The first step in processing is the one named in this section's title: constructing the request context with ServantHandle::createCurrent():
In ServantHandle::createCurrent(), a TarsCurrent instance is first created with new, and then its initialize() method is called. TarsCurrent::initialize(const TC_EpollServer::tagRecvData& stRecvData, int64_t beginTime) puts the contents of the RPC request packet into the request context TarsCurrentPtr current; from then on, only the request context needs attention. One small detail worth noting: with the TARS protocol the packet contents are placed into the request context via TarsCurrent::initialize(const string& sRecvBuffer), whereas otherwise they are copied directly with memcpy(). Here is a brief summary of the flow of this section:
2. Process the request (TARS protocol only). Once the request context has been obtained, it needs to be processed.
The RPC framework supports both the TARS protocol and non-TARS protocols. Only the handling of the TARS protocol is introduced below; the analysis for non-TARS protocols is much the same, and interested readers can work through it by analogy. Before diving in, look at the servant-related inheritance hierarchy, and take care not to confuse these three classes:
All right, now focus on the ServantHandle::handleTarsProtocol(const TarsCurrentPtr& current) function. Here is the code first:
On entering the function, the request context is preprocessed: the set-call validity check, dyeing (request tracing) handling, and so on. Then the servant object is obtained from the servant name in the context: map::iterator sit = _servants.find(current->getServantName()). _servants was populated during the startup of the Handle business threads described earlier; its key is the servant ID (or servant name) and its value is the pointer to the user-implemented XXXServantImp instance.
The XXXServantImp instance pointer can then be used to execute the RPC request: ret = sit->second->dispatch(current, buffer). In Servant::dispatch() (as shown in figure (2-26); because XXXServantImp inherits from XXXServant and XXXServant inherits from Servant, the method actually executed is Servant's), different protocols are handled differently. Here we only cover the TARS protocol, which calls the XXXServant::onDispatch(tars::TarsCurrentPtr _current, vector<char>& _sResponseBuffer) method:
The XXXServant class is generated when Tars2Cpp is run: the pure virtual functions corresponding to the user-defined tars file are generated, along with the onDispatch() method, whose actions are (a sketch follows the list):
1. Find the function in this servant class that corresponds to the request data.
2. Decode the function parameters from the request data.
3. Execute the corresponding user-defined RPC method in the XXXServantImp class.
4. Encode the result of the executed function.
5. Return tars::TARSSERVERSUCCESS.
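The generated code is not reproduced here; the following is a hedged, self-contained sketch of its shape for a servant with a single echo method (all names and the trivial "decoding" are illustrative, not the real generated serialization code):

```cpp
#include <iostream>
#include <string>
#include <vector>

// Stand-in for the request context; the real TarsCurrent carries much more state.
struct CurrentSketch {
    std::string funcName;      // decoded from the request packet
    std::string reqPayload;    // encoded parameters (kept as a plain string here)
};

// Shape of the generated XXXServant base: pure virtual RPC methods plus onDispatch().
struct StringServantSketch {
    virtual ~StringServantSketch() = default;
    virtual int echo(const std::string& in, std::string& out) = 0;   // user implements this

    int onDispatch(const CurrentSketch& current, std::vector<char>& response) {
        if (current.funcName == "echo") {                  // 1. locate the requested function
            const std::string& in = current.reqPayload;    // 2. "decode" parameters (simplified)
            std::string out;
            int ret = echo(in, out);                       // 3. run the user-defined method
            response.assign(out.begin(), out.end());       // 4. encode the result
            return ret;                                    // 5. e.g. tars::TARSSERVERSUCCESS
        }
        return -1;                                         // unknown function name
    }
};

// The user's XXXServantImp: only the business method is written by hand.
struct StringServantImpSketch : StringServantSketch {
    int echo(const std::string& in, std::string& out) override { out = in; return 0; }
};

int main() {
    StringServantImpSketch imp;
    std::vector<char> resp;
    CurrentSketch cur{"echo", "hello"};
    int ret = imp.onDispatch(cur, resp);
    std::cout << ret << " " << std::string(resp.begin(), resp.end()) << "\n";
    return 0;
}
```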
The steps above describe the server's default automatic reply behaviour. In practice, users can turn off the automatic reply (e.g. current->setResponse(false)) and send their own reply (e.g. servant->async_response_XXXAsync(current, ret, rStr)). Now that the server has executed the RPC method, let's briefly summarize this section:
3. Push the response packet to a thread-safe queue and notify the network thread
After processing the RPC request and executing the RPC method, you need to send the result (buffer in the following code) back to the client:
Because the business and network modules are independent of each other, the network thread uses the condition variable to notify the business threads after receiving a request packet. But how does a business thread notify the network thread? The network thread is blocked in epoll, so epoll itself has to be used to notify it. This time, look at the diagram summary first, then analyze the code:
In ServantHandle::handleTarsProtocol(), the final step is to send the response packet back. The process of sending back a packet is: encode the response information, find the network thread that received the request (since that is the one that must be notified to do the sending), put the response packet into that network thread's send queue, and use epoll's behaviour to wake the network thread up. Let's focus on NetThread::send():
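The NetThread::send() listing is omitted; the essence of the wake-up trick it relies on can be sketched standalone. A pipe's always-writable write end stands in for TARS's _notify socket, and the ET_NOTIFY tag value is an assumption: re-arming the notify fd for EPOLLOUT makes epoll_wait() return immediately in the network thread.

```cpp
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }
    int notifyFd = fds[1];                       // always writable, so EPOLLOUT fires once armed

    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return 1; }

    const uint64_t ET_NOTIFY = 2;                // assumed tag value

    epoll_event ev{};
    ev.events = 0;                               // registered, but no events armed yet
    ev.data.u64 = ET_NOTIFY << 32;               // tag in the high 32 bits
    epoll_ctl(epfd, EPOLL_CTL_ADD, notifyFd, &ev);

    // "Business thread": after queueing a response, arm EPOLLOUT to wake the epoll loop.
    ev.events = EPOLLOUT;
    epoll_ctl(epfd, EPOLL_CTL_MOD, notifyFd, &ev);

    // "Network thread": epoll_wait() now returns with the ET_NOTIFY tag.
    epoll_event out[4];
    int n = epoll_wait(epfd, out, 4, 1000);
    for (int i = 0; i < n; ++i)
        if ((out[i].data.u64 >> 32) == ET_NOTIFY)
            std::printf("notified: drain the send queue and write to the client\n");

    close(fds[0]); close(fds[1]); close(epfd);
    return 0;
}
```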
At this point, the business module of the server has completed its mission; sending the response data to the client is the network module's job.

Sending the RPC response
Having taken a request, the server of course needs to reply. From the above we know that the business module notifies the network thread via _epoller.mod(_notify.getfd(), H64(ET_NOTIFY), EPOLLOUT). Combined with the earlier analysis of accepting client connections and receiving RPC requests, we know to start again from NetThread::run(), this time entering the case ET_NOTIFY branch:
In NetThread::processPipe(), the response packet is first taken from the thread-safe queue: _sBufQueue.dequeue(sendp, false), which echoes the third point of processing RPC requests, "push the response packet to a thread-safe queue and notify the network thread". Then the uid of the Connection corresponding to the request is obtained from the response information, and the Connection is fetched with it: Connection* cPtr = getConnectionPtr(sendp->uid). Since Connection aggregates a TC_Socket, the response data is sent back to the client through the Connection, as shown in the following figure:
Here is a graphical summary of the working process of the server:
TARS makes it possible to build systems quickly, with automatic code generation, while keeping ease of use and high performance in mind. It helps developers and enterprises quickly build their own stable and reliable distributed applications as microservices, so that developers can focus only on business logic, improving operational efficiency. Multi-language support, agile development, high availability and efficient operation make TARS an enterprise-grade product. This concludes the study of how the TARS C++ server works; I hope it has resolved your doubts. Pairing the theory with practice will help you learn even better, so go and try it!