Comments on the Net module source code of trafficserver


The startup process of the Net module:

main.cc

main() ---> UnixNetProcessor::start()

Startup of the Net module

int
UnixNetProcessor::start(int, size_t)
{
  EventType etype = ET_NET;

  // allocate space for the NetHandler instance
  netHandler_offset = eventProcessor.allocate(sizeof(NetHandler));

  // allocate space for the PollCont instance
  pollCont_offset = eventProcessor.allocate(sizeof(PollCont));

  // the event type for UnixNetProcessor is ET_NET; for sslNetProcessor it is ET_SSL
  upgradeEtype(etype);

  // the number of net threads, obtained from eventProcessor (the threads were created during event module initialization)
  n_netthreads = eventProcessor.n_threads_for_type[etype];

  // the net threads themselves, obtained from eventProcessor
  netthreads = eventProcessor.eventthread[etype];

  // initialize all net threads
  for (int i = 0; i < n_netthreads; ++i) {
    initialize_thread_for_net(netthreads[i]);
#ifndef STANDALONE_IOCORE
    extern void initialize_thread_for_http_sessions(EThread *thread, int thread_index);
    initialize_thread_for_http_sessions(netthreads[i], i);
#endif
  }

  RecData d;
  d.rec_int = 0;
  // set the throttle (upper limit) on the number of network connections
  change_net_connections_throttle(NULL, RECD_INT, d, NULL);

  // socks-related setup; rarely used, so not covered here
  if (!netProcessor.socks_conf_stuff) {
    socks_conf_stuff = NEW(new socks_conf_struct);
    loadSocksConfiguration(socks_conf_stuff);
    if (!socks_conf_stuff->socks_needed && socks_conf_stuff->accept_enabled) {
      Warning("We can not have accept_enabled and socks_needed turned off" " disabling Socks accept\n");
      socks_conf_stuff->accept_enabled = 0;
    }
  } else {
    socks_conf_stuff = netProcessor.socks_conf_stuff;
  }

  // expose Net-related statistics on the stats page
#ifdef NON_MODULAR
  extern Action *register_ShowNet(Continuation *c, HTTPHdr *h);
  if (etype == ET_NET)
    statPagesManager.register_http("net", register_ShowNet);
#endif
  return 1;
}

main() ---> UnixNetProcessor::start() ---> initialize_thread_for_net()

As its name implies, this function initializes a thread for network I/O.

void
initialize_thread_for_net(EThread *thread)
{
  // create the NetHandler and PollCont instances
  // NetHandler: handles all Net-related events
  // PollCont: a poll continuation (an ATS design idiom) holding pointers to the NetHandler and to the PollDescriptor
  // PollDescriptor: the wrapper structure around the poll descriptor (the epoll fd)
  new ((ink_dummy_for_new *) get_NetHandler(thread)) NetHandler();
  new ((ink_dummy_for_new *) get_PollCont(thread)) PollCont(thread->mutex, get_NetHandler(thread));
  get_NetHandler(thread)->mutex = new_ProxyMutex();
  PollCont *pc = get_PollCont(thread);
  PollDescriptor *pd = pc->pollDescriptor;

  // schedule the NetHandler instance to start; eventually NetHandler::mainNetEvent() runs periodically
  thread->schedule_imm(get_NetHandler(thread));

#ifndef INACTIVITY_TIMEOUT
  // create an InactivityCop instance; at a fixed interval (1 second) it decides for each connection (vc) whether it can be closed, and closes it
  InactivityCop *inactivityCop = NEW(new InactivityCop(get_NetHandler(thread)->mutex));
  // periodically schedule inactivityCop's check_inactivity() function
  thread->schedule_every(inactivityCop, HRTIME_SECONDS(1));
#endif

  // register the signal handler
  thread->signal_hook = net_signal_hook_function;

  // create an EventIO instance and initialize it
  thread->ep = (EventIO *) ats_malloc(sizeof(EventIO));
  thread->ep->type = EVENTIO_ASYNC_SIGNAL;
#if HAVE_EVENTFD
  // start the EventIO instance and register the read event via epoll (if you are not familiar with epoll, read up on it first)
  thread->ep->start(pd, thread->evfd, 0, EVENTIO_READ);
#else
  thread->ep->start(pd, thread->evpipe[0], 0, EVENTIO_READ);
#endif
}
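The comment above assumes familiarity with epoll, so here is a minimal, self-contained sketch of the three calls everything in this module is built on: epoll_create() to create the poll descriptor (as PollDescriptor::init() does), epoll_ctl() to register a descriptor for read events (as EventIO::start() does), and epoll_wait() to collect ready events (as NetHandler::mainNetEvent() does). This is plain Linux API usage, not ATS code; the function name is made up for illustration.

// Minimal epoll sketch (Linux): register one fd for read readiness and wait.
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdio>

int poll_fd_once(int fd, int timeout_ms)
{
  int epfd = epoll_create(256);  // the size hint is ignored by modern kernels
  if (epfd < 0) { perror("epoll_create"); return -1; }

  struct epoll_event ev = {};
  ev.events = EPOLLIN;           // read readiness; EVENTIO_READ plays this role in ATS
  ev.data.fd = fd;
  if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) { perror("epoll_ctl"); close(epfd); return -1; }

  struct epoll_event triggered[16];
  int n = epoll_wait(epfd, triggered, 16, timeout_ms);  // what mainNetEvent() calls each period
  for (int i = 0; i < n; ++i)
    printf("fd %d is readable\n", triggered[i].data.fd);

  close(epfd);
  return n;
}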

Initialization of NetHandler

main() ---> UnixNetProcessor::start() ---> NetHandler::NetHandler()

The constructor sets the handler of NetHandler to NetHandler::startNetEvent:

NetHandler::NetHandler() : Continuation(NULL), trigger_event(0)
{
  SET_HANDLER((NetContHandler) &NetHandler::startNetEvent);
}

startNetEvent then sets the handler to NetHandler::mainNetEvent and schedules that function to run periodically:

int
NetHandler::startNetEvent(int event, Event *e)
{
  (void) event;
  SET_HANDLER((NetContHandler) &NetHandler::mainNetEvent);
  e->schedule_every(NET_PERIOD);
  trigger_event = e;
  return EVENT_CONT;
}
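The SET_HANDLER idiom used here is the heart of ATS continuations: a continuation is an object holding a member-function pointer, and moving to the next state is just swapping that pointer. A generic sketch of the pattern (hypothetical names, not the real Continuation class from the ATS event module):

// Generic handler-swap sketch (hypothetical types, not ATS code).
#include <cstdio>

struct Cont;
typedef int (Cont::*ContHandler)(int event);

struct Cont {
  ContHandler handler;

  int handleEvent(int event) { return (this->*handler)(event); }

  int startEvent(int event) {     // plays the role of NetHandler::startNetEvent
    printf("startEvent: swapping handler\n");
    handler = &Cont::mainEvent;   // like SET_HANDLER(&NetHandler::mainNetEvent)
    return 0;
  }
  int mainEvent(int event) {      // plays the role of NetHandler::mainNetEvent
    printf("mainEvent: event %d\n", event);
    return 0;
  }
};

int main()
{
  Cont c;
  c.handler = &Cont::startEvent;  // initial handler, as set in NetHandler's constructor
  c.handleEvent(1);               // first dispatch runs startEvent(), which swaps the handler
  c.handleEvent(2);               // subsequent dispatches run mainEvent()
  return 0;
}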

Initialization of PollCont

main() ---> UnixNetProcessor::start() ---> PollCont::PollCont()

PollCont::PollCont(ProxyMutex *m, NetHandler *nh, int pt) : Continuation(m), net_handler(nh), poll_timeout(pt)
{
  // create a PollDescriptor instance
  pollDescriptor = NEW(new PollDescriptor);
  // initialize the PollDescriptor instance
  pollDescriptor->init();
  // set the handler of PollCont to PollCont::pollEvent
  SET_HANDLER(&PollCont::pollEvent);
}

Initialization of PollDescriptor

main() ---> UnixNetProcessor::start() ---> PollCont::PollCont() ---> init()

PollDescriptor *init()
{
  result = 0;
#if TS_USE_EPOLL
  nfds = 0;
  // create the epoll file descriptor
  epoll_fd = epoll_create(POLL_DESCRIPTOR_SIZE);
  memset(ePoll_Triggered_Events, 0, sizeof(ePoll_Triggered_Events));
  memset(pfd, 0, sizeof(pfd));
#endif
  ......
  return this;
}

main() ---> UnixNetProcessor::start() ---> initialize_thread_for_net() ---> NetHandler::mainNetEvent()

This function looks a bit long, so let's state what it does first: it calls epoll_wait() to wait for events, then handles each event according to its type. Events come in three types: EVENTIO_READWRITE_VC (read/write events), EVENTIO_DNS_CONNECTION (DNS connect events), and EVENTIO_ASYNC_SIGNAL (async signal events). The reception of and response to a normal HTTP request fall under EVENTIO_READWRITE_VC. During the sending of a DNS request, when connect() is called to send the request, epoll_ctl() is called to register for the response event; that is EVENTIO_DNS_CONNECTION, which we do not care about yet. EVENTIO_ASYNC_SIGNAL is simply handled by calling net_signal_hook_callback().

Finally, the function traverses the NetHandler's read-ready and write-ready queues, calls read and write on each connection, and then notifies the upper layer.

int
NetHandler::mainNetEvent(int event, Event *e)
{
  ink_assert(trigger_event == e && (event == EVENT_INTERVAL || event == EVENT_POLL));
  (void) event;
  (void) e;
  EventIO *epd = NULL;
  int poll_timeout = net_config_poll_timeout;

  // statistics++
  NET_INCREMENT_DYN_STAT(net_handler_run_stat);

  // handle the UnixNetVConnections on NetHandler's read/write enable queues; for now, think of this as doing nothing
  process_enabled_list(this);

  if (likely(!read_ready_list.empty() || !write_ready_list.empty() || !read_enable_list.empty() || !write_enable_list.empty()))
    poll_timeout = 0;
  else
    poll_timeout = net_config_poll_timeout;

  PollDescriptor *pd = get_PollDescriptor(trigger_event->ethread);
  UnixNetVConnection *vc = NULL;
#if TS_USE_EPOLL
  // wait for epoll events
  pd->result = epoll_wait(pd->epoll_fd, pd->ePoll_Triggered_Events, POLL_DESCRIPTOR_SIZE, poll_timeout);
  NetDebug("iocore_net_main_poll", "[NetHandler::mainNetEvent] epoll_wait(%d,%d), result=%d",
           pd->epoll_fd, poll_timeout, pd->result);
#endif
  ......

  // handle all events
  vc = NULL;
  for (int x = 0; x < pd->result; x++) {
    epd = (EventIO *) get_ev_data(pd, x);
    // EVENTIO_READWRITE_VC handling: for a read event, add the vc to NetHandler's read-ready list
    // read_ready_list; for a write event, add it to the write-ready list write_ready_list.
    if (epd->type == EVENTIO_READWRITE_VC) {
      vc = epd->data.vc;
      if (get_ev_events(pd, x) & (EVENTIO_READ | EVENTIO_ERROR)) {
        vc->read.triggered = 1;
        if (!read_ready_list.in(vc))
          read_ready_list.enqueue(vc);
        else if (get_ev_events(pd, x) & EVENTIO_ERROR) {
          // check for unhandled epoll events that should be handled
          Debug("iocore_net_main", "Unhandled epoll event on read: 0x%04x read.enabled=%d closed=%d read.netready_queue=%d",
                get_ev_events(pd, x), vc->read.enabled, vc->closed, read_ready_list.in(vc));
        }
      }
      vc = epd->data.vc;
      if (get_ev_events(pd, x) & (EVENTIO_WRITE | EVENTIO_ERROR)) {
        vc->write.triggered = 1;
        if (!write_ready_list.in(vc))
          write_ready_list.enqueue(vc);
        else if (get_ev_events(pd, x) & EVENTIO_ERROR) {
          Debug("iocore_net_main",
                "Unhandled epoll event on write: 0x%04x write.enabled=%d closed=%d write.netready_queue=%d",
                get_ev_events(pd, x), vc->write.enabled, vc->closed, write_ready_list.in(vc));
        }
      } else if (!(get_ev_events(pd, x) & EVENTIO_ERROR)) {
        Debug("iocore_net_main", "Unhandled epoll event: 0x%04x", get_ev_events(pd, x));
      }
    // EVENTIO_DNS_CONNECTION handling: add the connection to DNSHandler's triggered queue
    } else if (epd->type == EVENTIO_DNS_CONNECTION) {
      if (epd->data.dnscon != NULL) {
        epd->data.dnscon->trigger();
#if defined(USE_EDGE_TRIGGER)
        epd->refresh(EVENTIO_READ);
#endif
      }
    } else if (epd->type == EVENTIO_ASYNC_SIGNAL)
      net_signal_hook_callback(trigger_event->ethread);
    ev_next_event(pd, x);
  }

  pd->result = 0;

#if defined(USE_EDGE_TRIGGER)
  // traverse the vcs on the handler's read-ready queue and call net_read_io() on each one;
  // net_read_io() calls read() to receive the data and then notifies the upper layer (HttpSM)
  while ((vc = read_ready_list.dequeue())) {
    if (vc->closed)
      close_UnixNetVConnection(vc, trigger_event->ethread);
    else if (vc->read.enabled && vc->read.triggered)
      vc->net_read_io(this, trigger_event->ethread);
    else if (!vc->read.enabled) {
      read_ready_list.remove(vc);
    }
  }
  // traverse the vcs on the handler's write-ready queue and call write_to_net() on each one;
  // write_to_net() calls write() to send the data and then notifies the upper layer (HttpSM)
  while ((vc = write_ready_list.dequeue())) {
    if (vc->closed)
      close_UnixNetVConnection(vc, trigger_event->ethread);
    else if (vc->write.enabled && vc->write.triggered)
      write_to_net(this, vc, trigger_event->ethread);
    else if (!vc->write.enabled) {
      write_ready_list.remove(vc);
    }
  }
#endif
  return EVENT_CONT;
}

Don't forget the InactivityCop structure:

main() ---> UnixNetProcessor::start() ---> initialize_thread_for_net() ---> InactivityCop()

The constructor sets the handler to InactivityCop::check_inactivity, which is called once per second:

struct InactivityCop : public Continuation {
  InactivityCop(ProxyMutex *m) : Continuation(m) {
    SET_HANDLER(&InactivityCop::check_inactivity);
  }

main() ---> UnixNetProcessor::start() ---> initialize_thread_for_net() ---> InactivityCop() ---> InactivityCop::check_inactivity()

  int check_inactivity(int event, Event *e) {
    (void) event;
    ink_hrtime now = ink_get_hrtime();
    NetHandler *nh = get_NetHandler(this_ethread());
    // traverse NetHandler's open-connection queue; every vc that belongs to this thread is pushed onto NetHandler's cop_list queue
    forl_LL(UnixNetVConnection, vc, nh->open_list) {
      if (vc->thread == this_ethread())
        nh->cop_list.push(vc);
    }
    while (UnixNetVConnection *vc = nh->cop_list.pop()) {
      // if we can not get the lock, don't stop, just keep cleaning
      MUTEX_TRY_LOCK(lock, vc->mutex, this_ethread());
      if (!lock.lock_acquired) {
        NET_INCREMENT_DYN_STAT(inactivity_cop_lock_acquire_failure_stat);
        continue;
      }
      // if the vc is marked closed, call close_UnixNetVConnection() to close it
      if (vc->closed) {
        close_UnixNetVConnection(vc, e->ethread);
        continue;
      }
      // if the vc's inactivity deadline has passed, call its handler (UnixNetVConnection::mainEvent) to handle it
      if (vc->next_inactivity_timeout_at && vc->next_inactivity_timeout_at < now)
        vc->handleEvent(EVENT_IMMEDIATE, e);
    }
    return 0;
  }
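The idea behind InactivityCop is easy to sketch outside ATS: every connection carries the deadline of its next inactivity timeout (refreshed whenever it does I/O), and a sweep scheduled once per second fires the timeout handling for every connection whose deadline has passed. A minimal sketch with hypothetical types follows; the real code dispatches to UnixNetVConnection::mainEvent rather than closing directly.

// Inactivity-sweep sketch (hypothetical types, not ATS code).
#include <chrono>
#include <cstdio>
#include <list>

using Clock = std::chrono::steady_clock;

struct Conn {
  int id;
  bool closed = false;                           // like UnixNetVConnection::closed
  Clock::time_point next_inactivity_timeout_at;  // refreshed on every read/write
};

// scheduled every second, like InactivityCop::check_inactivity()
void check_inactivity(std::list<Conn> &open_list)
{
  const Clock::time_point now = Clock::now();
  for (auto it = open_list.begin(); it != open_list.end();) {
    if (it->closed || it->next_inactivity_timeout_at < now) {
      printf("closing connection %d\n", it->id);  // ATS would fire the vc's handler here
      it = open_list.erase(it);
    } else {
      ++it;
    }
  }
}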

All right, the startup process of NetProcessor has now been analyzed. It can be summarized simply as follows: NetProcessor startup mainly initializes several threads that call epoll_wait() to wait for read and write events. When a read event arrives, read is called and the data it reads is handed to the upper layer for processing; when a write event arrives, write is called to send the data, and the upper layer is notified of the result afterwards. So where do these read and write events come from? As network programming experience tells us, a server must accept before it can read and write, and the read and write events here come from accept. So next let's analyze NetProcessor's accept. NetProcessor's accept is invoked when HttpProxyServer starts; more precisely, it is the main_accept() function.
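Before reading the ATS code, the shape of this accept-to-epoll handoff can be shown in a few lines of plain POSIX (hypothetical helper names, not the ATS implementation): a dedicated thread blocks in accept(), and each new connection fd is registered with the poller the net threads are waiting on.

// Sketch of the blocking-accept ---> epoll handoff (plain POSIX, hypothetical names).
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// register a freshly accepted connection for read/write readiness,
// as UnixNetVConnection::acceptEvent() does via EventIO::start()
static void register_with_poller(int epfd, int conn_fd)
{
  struct epoll_event ev = {};
  ev.events = EPOLLIN | EPOLLOUT | EPOLLET;  // read + write, edge-triggered (USE_EDGE_TRIGGER)
  ev.data.fd = conn_fd;
  epoll_ctl(epfd, EPOLL_CTL_ADD, conn_fd, &ev);
}

// like NetAccept::do_blocking_accept(): loop forever handing connections to the poller
static void accept_loop(int listen_fd, int epfd)
{
  for (;;) {
    int conn_fd = accept(listen_fd, NULL, NULL);  // blocking accept
    if (conn_fd < 0) { perror("accept"); continue; }
    register_with_poller(epfd, conn_fd);  // a net thread's epoll_wait() will now see it
  }
}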

main() ---> start_HttpProxyServer()

void
start_HttpProxyServer()
{
  // create an Acceptor for each configured port (by default there is only one: 8080)
  for (int i = 0, n = proxy_ports.length(); i < n; ++i) {
    HttpProxyAcceptor &acceptor = HttpProxyAcceptors[i];
    HttpProxyPort &port = proxy_ports[i];
    ......
    if (NULL == netProcessor.main_accept(acceptor._accept, port.m_fd, acceptor._net_opt))
      return;
  }
  ......
}

main() ---> start_HttpProxyServer() ---> NetProcessor::main_accept()

Action *
NetProcessor::main_accept(Continuation *cont, SOCKET fd, AcceptOptions const &opt)
{
  UnixNetProcessor *this_unp = static_cast<UnixNetProcessor *>(this);
  Debug("iocore_net_processor", "NetProcessor::main_accept - port %d, recv_bufsize %d, send_bufsize %d, sockopt 0x%0x",
        opt.local_port, opt.recv_bufsize, opt.send_bufsize, opt.sockopt_flags);
  // call UnixNetProcessor::accept_internal() directly
  return this_unp->accept_internal(cont, fd, opt);
}

main() ---> start_HttpProxyServer() ---> NetProcessor::main_accept() ---> UnixNetProcessor::accept_internal()

Action *
UnixNetProcessor::accept_internal(Continuation *cont, int fd, AcceptOptions const &opt)
{
  EventType et = opt.etype;
  // create a NetAccept instance
  NetAccept *na = createNetAccept();
  EThread *thread = this_ethread();
  ProxyMutex *mutex = thread->mutex;
  int accept_threads = opt.accept_threads;
  IpEndpoint accept_ip;

  upgradeEtype(et);

  // opt holds the network-related options read from the configuration; for the individual Net options, see the ATS configuration documentation
  if (opt.accept_threads < 0) {
    REC_ReadConfigInteger(accept_threads, "proxy.config.accept_threads");
  }

  NET_INCREMENT_DYN_STAT(net_accepts_currently_open_stat);

  // set the server address according to the configured mode
  if (opt.localhost_only) {
    accept_ip.setToLoopback(opt.ip_family);
  } else if (opt.local_ip.isValid()) {
    accept_ip.assign(opt.local_ip);
  } else {
    accept_ip.setToAnyAddr(opt.ip_family);
  }
  ink_assert(0 < opt.local_port && opt.local_port < 65536);
  accept_ip.port() = htons(opt.local_port);

  na->accept_fn = net_accept;
  na->server.fd = fd;
  ats_ip_copy(&na->server.accept_addr, &accept_ip);
  na->server.f_inbound_transparent = opt.f_inbound_transparent;
  // transparent proxy
  if (opt.f_inbound_transparent) {
    Debug("http_tproxy", "Marking accept server %p on port %d as inbound transparent", na, opt.local_port);
  }

  int should_filter_int = 0;
  na->server.http_accept_filter = false;
  // see the description of this configuration item: accept completes only once data has arrived, and by default the
  // connection is abandoned after 45 seconds; this is implemented by the setsockopt() call near the end of this function
  REC_ReadConfigInteger(should_filter_int, "proxy.config.net.defer_accept");
  if (should_filter_int > 0 && opt.etype == ET_NET)
    na->server.http_accept_filter = true;

  na->action_ = NEW(new NetAcceptAction());
  *na->action_ = cont;  // points to the upper-layer continuation, e.g. HttpAccept
  // the following initialize the network-receive parameters, such as buffer sizes
  na->action_->server = &na->server;
  na->callback_on_open = opt.f_callback_on_open;
  na->recv_bufsize = opt.recv_bufsize;
  na->send_bufsize = opt.send_bufsize;
  na->sockopt_flags = opt.sockopt_flags;
  na->packet_mark = opt.packet_mark;
  na->packet_tos = opt.packet_tos;
  na->etype = opt.etype;
  na->backdoor = opt.backdoor;
  if (na->callback_on_open)
    na->mutex = cont->mutex;

  // accept in real time
  if (opt.frequent_accept) {
    // number of configured accept threads
    if (accept_threads > 0) {
      // set the socket options and listen
      if (0 == na->do_listen(BLOCKING, opt.f_inbound_transparent)) {
        NetAccept *a;
        // create a NetAccept instance for each accept thread, copy the na built above into it, and finally
        // call NetAccept::init_accept_loop() to enter the accept loop
        for (int i = 1; i < accept_threads; ++i) {
          a = createNetAccept();
          *a = *na;
          a->init_accept_loop();
          Debug("iocore_net_accept", "Created accept thread #%d for port %d", i, ats_ip_port_host_order(&accept_ip));
        }
        Debug("iocore_net_accept", "Created accept thread #%d for port %d", accept_threads, ats_ip_port_host_order(&accept_ip));
        na->init_accept_loop();
      }
    } else {
      na->init_accept_per_thread();
    }
  } else
    na->init_accept();

  // see the description of proxy.config.net.defer_accept above: the deferred accept is implemented here by setting the
  // TCP_DEFER_ACCEPT option on the socket
#ifdef TCP_DEFER_ACCEPT
  if (should_filter_int > 0) {
    setsockopt(na->server.fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &should_filter_int, sizeof(int));
  }
#endif
  return na->action_;
}
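TCP_DEFER_ACCEPT itself is a standard Linux socket option rather than anything ATS-specific. Here is a standalone illustration of what the setsockopt() call above arranges; the listener setup and names are hypothetical, and in ATS the timeout value comes from proxy.config.net.defer_accept.

// Standalone TCP_DEFER_ACCEPT illustration (Linux): accept() completes only once
// data has arrived, or the kernel gives up after the given number of seconds.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <cstdio>

int make_deferred_listener(unsigned short port, int defer_seconds)
{
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) { perror("socket"); return -1; }

#ifdef TCP_DEFER_ACCEPT
  // the same call accept_internal() makes on the listening socket
  setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &defer_seconds, sizeof(defer_seconds));
#endif

  struct sockaddr_in addr = {};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(port);
  if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 || listen(fd, 128) < 0) {
    perror("bind/listen");
    return -1;
  }
  return fd;
}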

main() ---> start_HttpProxyServer() ---> NetProcessor::main_accept() ---> UnixNetProcessor::accept_internal() ---> NetAccept::init_accept_loop()

This function creates a thread and sets the thread's execution function to NetAccept::acceptLoopEvent:

void
NetAccept::init_accept_loop()
{
  size_t stacksize;
  // thread stack size
  REC_ReadConfigInteger(stacksize, "proxy.config.thread.default.stacksize");
  SET_CONTINUATION_HANDLER(this, &NetAccept::acceptLoopEvent);
  eventProcessor.spawn_thread(this, "[ACCEPT]", stacksize);
}

main() ---> start_HttpProxyServer() ---> NetProcessor::main_accept() ---> UnixNetProcessor::accept_internal() ---> NetAccept::init_accept_loop() ---> NetAccept::acceptLoopEvent()

int
NetAccept::acceptLoopEvent(int event, Event *e)
{
  (void) event;
  (void) e;
  EThread *t = this_ethread();
  // here, at last, is the endless accept loop (listen, of course, has already been done before accept is used)
  while (1)
    do_blocking_accept(t);
  NET_DECREMENT_DYN_STAT(net_accepts_currently_open_stat);
  delete this;
  return EVENT_DONE;
}

main() ---> start_HttpProxyServer() ---> NetProcessor::main_accept() ---> UnixNetProcessor::accept_internal() ---> NetAccept::init_accept_loop() ---> NetAccept::acceptLoopEvent() ---> NetAccept::do_blocking_accept()

This function calls accept in a loop to receive requests and lets the event system schedule the processing of each connection (UnixNetVConnection).

int
NetAccept::do_blocking_accept(EThread *t)
{
  int res = 0;
  int loop = accept_till_done;
  UnixNetVConnection *vc = NULL;

  do {
    // allocate a UnixNetVConnection instance representing the connection
    vc = (UnixNetVConnection *) alloc_cache;
    if (likely(!vc)) {
      vc = allocateGlobal();
      vc->from_accept_thread = true;
      vc->id = net_next_connection_number();
      alloc_cache = vc;
    }
    // flow control
    ink_hrtime now = ink_get_hrtime();
    while (!backdoor && check_net_throttle(ACCEPT, now)) {
      check_throttle_warning();
      if (!unix_netProcessor.throttle_error_message) {
        safe_delay(NET_THROTTLE_DELAY);
      } else if (send_throttle_message(this) < 0) {
        goto Lerror;
      }
      now = ink_get_hrtime();
    }
    // call accept to receive the request
    if ((res = server.accept(&vc->con)) < 0) {
      // error handling
    Lerror:
      int seriousness = accept_error_seriousness(res);
      if (seriousness >= 0) {
        if (!seriousness)
          check_transient_accept_error(res);
        safe_delay(NET_THROTTLE_DELAY);
        return 0;
      }
      if (!action_->cancelled) {
        MUTEX_LOCK(lock, action_->mutex, t);
        action_->continuation->handleEvent(EVENT_ERROR, (void *) (intptr_t) res);
        MUTEX_UNTAKE_LOCK(action_->mutex, t);
        Warning("accept thread received fatal error: errno = %d", errno);
      }
      return -1;
    }
    // flow control
    check_emergency_throttle(vc->con);
    alloc_cache = NULL;
    NET_SUM_GLOBAL_DYN_STAT(net_connections_currently_open_stat, 1);
    // set the vc's submit time and the server's ip address
    vc->submit_time = now;
    ats_ip_copy(&vc->server_addr, &vc->con.addr);
    // transparent proxy flag
    vc->set_is_transparent(server.f_inbound_transparent);
    vc->mutex = new_ProxyMutex();
    vc->action_ = *action_;
    // set the handler of UnixNetVConnection to UnixNetVConnection::acceptEvent
    SET_CONTINUATION_HANDLER(vc, (NetVConnHandler) &UnixNetVConnection::acceptEvent);
    // let the event system schedule the execution of the vc's handler
    eventProcessor.schedule_imm_signal(vc, getEtype());
  } while (loop);

  return 1;
}

main() ---> start_HttpProxyServer() ---> NetProcessor::main_accept() ---> UnixNetProcessor::accept_internal() ---> NetAccept::init_accept_loop() ---> NetAccept::acceptLoopEvent() ---> NetAccept::do_blocking_accept() ---> UnixNetVConnection::acceptEvent()

This function takes over a new connection: it registers read and write events with the NetHandler so that the NetHandler can receive the connection's data and send the response messages for its requests (that is the NetHandler's job), and finally it calls the upper-layer HttpSM handler (HttpAccept::mainEvent) to accept the connection.

int
UnixNetVConnection::acceptEvent(int event, Event *e)
{
  thread = e->ethread;
  MUTEX_TRY_LOCK(lock, get_NetHandler(thread)->mutex, e->ethread);
  if (!lock) {
    if (event == EVENT_NONE) {
      thread->schedule_in(this, NET_RETRY_DELAY);
      return EVENT_DONE;
    } else {
      e->schedule_in(NET_RETRY_DELAY);
      return EVENT_CONT;
    }
  }
  if (action_.cancelled) {
    free(thread);
    return EVENT_DONE;
  }

  // set the handler of UnixNetVConnection to UnixNetVConnection::mainEvent
  SET_HANDLER((NetVConnHandler) &UnixNetVConnection::mainEvent);

  // get the pointer to the NetHandler described earlier
  nh = get_NetHandler(thread);
  // get the pointer to the PollDescriptor described earlier
  PollDescriptor *pd = get_PollDescriptor(thread);
  // register epoll read and write events; this is what ties back to the process described above
  if (ep.start(pd, this, EVENTIO_READ | EVENTIO_WRITE) < 0) {
    Debug("iocore_net", "acceptEvent : failed EventIO::start\n");
    close_UnixNetVConnection(this, e->ethread);
    return EVENT_DONE;
  }

  // add the vc to NetHandler's open-connection queue open_list
  nh->open_list.enqueue(this);

  // set the appropriate timeouts for closing the connection
  if (inactivity_timeout_in)
    UnixNetVConnection::set_inactivity_timeout(inactivity_timeout_in);
  if (active_timeout_in)
    UnixNetVConnection::set_active_timeout(active_timeout_in);

  // call the upper-layer handler (e.g. HttpAccept::mainEvent) to process this connection;
  // how it is processed is left to the analysis of the HTTP processing flow
  action_.continuation->handleEvent(NET_EVENT_ACCEPT, this);
  return EVENT_DONE;
}

