Example Analysis of Handler Source Code in Java


This article walks through the Java-layer source code behind Handler. I hope it serves as a useful reference for readers interested in how the mechanism works.

I first came across Handler a long time ago, but at that time my practical experience was limited: I did not understand it deeply and could not use it freely. With more working experience I have gained a better understanding of Handler, hence this post, in which I try to sum up as many of the key points as possible.

At the Java layer, the Handler mechanism involves four main classes: Looper, MessageQueue, Message and Handler. I have summarized the main points for each:

Looper: sThreadLocal, Looper.loop()

Message: data structure, message cache pool

MessageQueue: enqueueMessage, next, pipe-based waiting, synchronous message isolation (sync barrier), IdleHandler

Handler: send/post, dispatchMessage message-processing priority.

Looper

Looper data structure

static final ThreadLocal<Looper> sThreadLocal = new ThreadLocal<Looper>();
private static Looper sMainLooper;  // guarded by Looper.class
final MessageQueue mQueue;

// sThreadLocal
private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) { throw new RuntimeException(...); }
    sThreadLocal.set(new Looper(quitAllowed));
}

public static @Nullable Looper myLooper() {
    return sThreadLocal.get();
}

// sMainLooper
public static void prepareMainLooper() {
    prepare(false);
    synchronized (Looper.class) {
        if (sMainLooper != null) { throw new IllegalStateException(...); }
        sMainLooper = myLooper();
    }
}

public static Looper getMainLooper() {
    synchronized (Looper.class) {
        return sMainLooper;
    }
}

// mQueue
private Looper(boolean quitAllowed) {
    mQueue = new MessageQueue(quitAllowed);
    mThread = Thread.currentThread();
}

public static @NonNull MessageQueue myQueue() {
    return myLooper().mQueue;
}

sThreadLocal: a static final field that ensures each thread has at most one Looper.

sMainLooper: a static field that is assigned the current thread's Looper in prepareMainLooper.

mQueue: an instance field initialized in the Looper constructor; since a thread has only one Looper, it also has only one mQueue.

From the above analysis, we can summarize the following characteristics:

Looper and MessageQueue are per-thread: each thread has at most one of each.

A process has only one sMainLooper.

Thanks to ThreadLocal, the current thread's Looper can be obtained through the myLooper() method (a short usage sketch follows this list).
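As a minimal usage sketch of these APIs on a worker thread (the thread and handler names are illustrative; this is essentially the pattern that HandlerThread wraps for you):

// Minimal sketch: giving a worker thread its own Looper and MessageQueue.
// The names "worker" and "handler" are illustrative.
Thread worker = new Thread(() -> {
    Looper.prepare();                       // binds a new Looper (and MessageQueue) to this thread
    Handler handler = new Handler(Looper.myLooper()) {
        @Override
        public void handleMessage(Message msg) {
            // handle messages on the worker thread
        }
    };
    Looper.loop();                          // start dispatching; blocks until the Looper is quit
});
worker.start();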

Looper.loop()

public static void loop() {
    final Looper me = myLooper();
    final MessageQueue queue = me.mQueue;
    for (;;) {
        Message msg = queue.next();
        ...
        msg.target.dispatchMessage(msg);
        ...
        msg.recycleUnchecked();
    }
}

Although Looper.loop() contains quite a lot of code, it mainly does three things:

Get the next message from the message queue.

msg.target is the Handler the message belongs to; the message is dispatched through its dispatchMessage method (see the sketch after this list).

The message is recycled into the message cache pool; note that the Message object is not released but cached for reuse.
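For reference, a sketch close to the framework's Handler.dispatchMessage shows the processing priority: a Runnable set as msg.callback (from post()) wins, then the Handler's Callback passed to the constructor, then the handleMessage() override. handleCallback simply runs msg.callback.

// Sketch of Handler.dispatchMessage: priority is
// msg.callback (posted Runnable) -> mCallback (constructor Callback) -> handleMessage() override.
public void dispatchMessage(Message msg) {
    if (msg.callback != null) {
        handleCallback(msg);                 // runs msg.callback.run()
    } else {
        if (mCallback != null) {
            if (mCallback.handleMessage(msg)) {
                return;                      // the Callback consumed the message
            }
        }
        handleMessage(msg);                  // subclass override, empty by default
    }
}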

Message

Message data structure

public int what, arg1, arg2;
public Object obj;
public Messenger replyTo;
int flags;
long when;              // when the message should be delivered
Bundle data;
Handler target;
Runnable callback;
Message next;

private static final Object sPoolSync = new Object();
private static Message sPool;
private static int sPoolSize = 0;
private static final int MAX_POOL_SIZE = 50;

The next field suggests a singly linked list. In fact, each Message is a node of such a list, and MessageQueue is the list itself, holding a reference to the head.

what, arg1, arg2, obj and data carry the payload of the message we send.

Note that target is of type Handler: it is the Handler this message belongs to, and it is assigned when the Handler sends the message.

The last four fields (sPoolSync, sPool, sPoolSize, MAX_POOL_SIZE) are all related to the message cache pool.

Message cache pool

public static Message obtain() {
    synchronized (sPoolSync) {
        if (sPool != null) {
            Message m = sPool;
            sPool = m.next;
            m.next = null;
            m.flags = 0;   // clear in-use flag
            sPoolSize--;
            return m;
        }
    }
    return new Message();
}

void recycleUnchecked() {
    flags = FLAG_IN_USE;
    what = 0;
    arg1 = 0;
    arg2 = 0;
    obj = null;
    replyTo = null;
    sendingUid = -1;
    when = 0;
    target = null;
    callback = null;
    data = null;

    synchronized (sPoolSync) {
        if (sPoolSize < MAX_POOL_SIZE) {
            next = sPool;
            sPool = this;
            sPoolSize++;
        }
    }
}

The cache pool is itself a linked list, with sPool as the head reference and a maximum capacity of 50.

When a message is recycled, all of its fields are reset and the message becomes the new head of the pool.

When obtaining a message, the current head is returned and its next reference is cleared (a usage sketch follows).
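As a small usage sketch (the what/arg values and the handler variable are illustrative), this is why obtain() is preferred over new Message(): it reuses pooled instances instead of allocating.

// Usage sketch: obtain a pooled Message rather than allocating a new one.
// The 'what' and 'arg1' values and the 'handler' variable are illustrative.
Message msg = Message.obtain(handler);   // also sets msg.target = handler
msg.what = 1;
msg.arg1 = 42;
handler.sendMessage(msg);
// After dispatch, Looper.loop() calls msg.recycleUnchecked(), returning msg to the pool.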

MessageQueue

Insert into the queue (enqueueMessage)

boolean enqueueMessage(Message msg, long when) {
    synchronized (this) {
        msg.markInUse();
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // Insert as the new head; the queue needs to be woken up if it is blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Insert into the middle of the linked list, ordered by delivery time.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p;       // insert the message
            prev.next = msg;
        }
        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}

This is the method that inserts a message into the queue. Its main features:

The message is added to the message queue. If the current head is empty (or the new message is due earlier), the message becomes the head; otherwise it is inserted at the position corresponding to its delivery time.

nativeWake performs the underlying pipe write that wakes up the queue; no wake-up is needed when the queue is not blocked.

Also note the synchronized keyword: insertion into the message queue is thread-safe, and so is removal, which we will discuss later. A simplified sketch of how Handler's send path reaches this method follows.
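For context, a simplified sketch of how a delayed send from Handler arrives at enqueueMessage (the real framework code has extra bookkeeping): the delay is converted to an absolute uptime, and the Handler assigns itself as msg.target before enqueueing, which is where the target field discussed above gets set.

// Simplified sketch of the Handler send path down to MessageQueue.enqueueMessage.
public final boolean sendMessageDelayed(Message msg, long delayMillis) {
    if (delayMillis < 0) {
        delayMillis = 0;
    }
    // 'when' becomes an absolute time on the uptime clock
    return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
}

public boolean sendMessageAtTime(Message msg, long uptimeMillis) {
    MessageQueue queue = mQueue;
    msg.target = this;                        // the Handler assigns itself as the message's target
    return queue.enqueueMessage(msg, uptimeMillis);
}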

MessageQueue.next()

for (;;) {
    nativePollOnce(ptr, nextPollTimeoutMillis);
    synchronized (this) {
        final long now = SystemClock.uptimeMillis();
        Message prevMsg = null;
        Message msg = mMessages;
        if (msg != null && msg.target == null) {
            // A sync barrier is in place: look for the next asynchronous message first.
            do {
                prevMsg = msg;
                msg = msg.next;
            } while (msg != null && !msg.isAsynchronous());
        }
        if (msg != null) {
            if (now < msg.when) {
                // Not due yet: compute how long to wait until the next message.
                nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
            } else {
                // Got a message.
                mBlocked = false;
                if (prevMsg != null) {
                    prevMsg.next = msg.next;
                } else {
                    mMessages = msg.next;
                }
                msg.next = null;
                if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                msg.markInUse();
                return msg;
            }
        } else {
            // No more messages: nextPollTimeoutMillis is set to -1.
            nextPollTimeoutMillis = -1;
        }
        ...
    }

    // If there are no pending messages and the queue is idle, idler.queueIdle() is executed.
    for (int i = 0; i < pendingIdleHandlerCount; i++) {
        final IdleHandler idler = mPendingIdleHandlers[i];
        mPendingIdleHandlers[i] = null;   // release the reference to the handler
        boolean keep = false;
        try {
            keep = idler.queueIdle();
        } catch (Throwable t) {
            Log.wtf(TAG, "IdleHandler threw exception", t);
        }
        if (!keep) {
            synchronized (this) {
                mIdleHandlers.remove(idler);
            }
        }
    }
    ...
}

This method retrieves and returns the next message from the message queue. It mainly does the following:

nativePollOnce calls down to the underlying pipe-based wait. If nextPollTimeoutMillis is -1 it blocks indefinitely, if it is 0 it does not block, and if it is greater than 0 it blocks for that many milliseconds.

If a synchronization barrier is in place, asynchronous messages are looked up first.

If a message is due at the current time, it is removed from the queue and returned.

If no messages are due, idler.queueIdle() is executed to notify listeners that the queue is currently idle (a usage sketch follows this list).
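As a usage sketch of the IdleHandler mechanism (the work inside queueIdle() is purely illustrative): register it on the current thread's queue and it runs only when the queue has nothing due.

// Usage sketch: run deferred work only when the message queue is idle.
// preloadCache() is an illustrative placeholder for the deferred work.
Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
    @Override
    public boolean queueIdle() {
        preloadCache();
        return false;   // false: this IdleHandler is removed after running once
    }
});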

Synchronous message isolation

We mentioned synchronous message isolation (the synchronization barrier) above, so let's introduce it here. The barrier and asynchronous messages are two sides of the same mechanism: once a barrier is posted, only asynchronous messages are dispatched. In the framework source it is mainly used so that UI updates are handled first. A sketch of how application code marks its own messages asynchronous appears after the source excerpt below.

private IdleHandler[] mPendingIdleHandlers;   // field used by the idle-handler loop in next(), shown above

public int postSyncBarrier() {
    return postSyncBarrier(SystemClock.uptimeMillis());
}

private int postSyncBarrier(long when) {
    // Add a message with an empty (null) target handler to the message queue: this is the barrier.
    synchronized (this) {
        final int token = mNextBarrierToken++;
        final Message msg = Message.obtain();
        msg.markInUse();
        msg.when = when;
        msg.arg1 = token;

        Message prev = null;
        Message p = mMessages;
        if (when != 0) {
            // Walk the list to the insertion point, using the same time ordering as enqueueMessage.
            while (p != null && p.when <= when) {
                prev = p;
                p = p.next;
            }
        }
        if (prev != null) {
            msg.next = p;
            prev.next = msg;
        } else {
            msg.next = p;
            mMessages = msg;
        }
        return token;
    }
}

The barrier is simply a Message whose target is null. As we saw in next(), while such a message sits at the head of the queue only asynchronous messages are dispatched, until the barrier is removed again with removeSyncBarrier(token).
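Application code does not normally post barriers itself (the framework does, for example around UI traversals); what you can do with public APIs is mark your own messages as asynchronous so a barrier does not stall them. A minimal sketch, assuming an existing handler:

// Minimal sketch: messages that are not stalled by a sync barrier.
// 'handler' is assumed to exist; the 'what' value is illustrative.
Message m = Message.obtain(handler, 1);
m.setAsynchronous(true);          // API 22+: this message bypasses sync barriers
handler.sendMessage(m);

// Alternatively (API 28+), a Handler whose messages are all asynchronous:
Handler asyncHandler = Handler.createAsync(Looper.getMainLooper());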
