What is the use of Binder subsystem in Android system


This article introduces what the Binder subsystem in the Android system is used for. It is meant as a practical reference; let's walk through it step by step.

At the core of the binder system are two styles of communication: IPC and RPC. With IPC, source A sends data directly to destination B; with RPC, A calls a function on B remotely.

1. IPC communication has three elements:

1. Source: A.

2. Destination: B registers the "led" service with the servicemanager; A queries the servicemanager for the "led" service and obtains a handle.

3. The data itself: a char buf.

2. RPC communication is a remote function call, which adds two more questions:

1. Which function to call: identified by a function number on the server.

2. What parameters to pass and what value is returned: these are packed into a buf and carried over IPC.

Example: LED control. In IPC mode, the data is sent directly from A to B. In RPC mode, the data is first wrapped by led_open and led_ctl on the client side and sent to B, where it is unpacked and the real led_open and led_ctl are called, as sketched below.
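To make the contrast concrete, here is a minimal sketch of such a client-side RPC stub. The function number, the ipc_send/ipc_recv primitives, and this led_ctl signature are all hypothetical illustrations of the pattern, not binder APIs:

    /* Hypothetical function number assigned by the server. */
    #define FUNC_LED_CTL 2

    /* Hypothetical IPC primitives standing in for the real transport. */
    extern int ipc_send(const void *buf, int len);
    extern int ipc_recv(void *buf, int len);

    /* Client-side stub: the caller thinks it is calling led_ctl locally,
     * but the stub only packs the function number and arguments into a
     * buffer, ships it over IPC, and waits for the return value. */
    int led_ctl(int which, int status)
    {
        int buf[3] = { FUNC_LED_CTL, which, status };  /* function number + parameters */
        int ret;

        ipc_send(buf, sizeof(buf));    /* IPC element 3: the data itself */
        ipc_recv(&ret, sizeof(ret));   /* the value returned by the remote led_ctl */
        return ret;
    }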

Let's start with an overview of what the client, the servicemanager, and the server each do.

Client:

1. Open the driver.

2. Get the service: query the servicemanager for it and obtain a handle.

3. Send data to the handle.

Servicemanager:

1. Open the driver.

2. Tell the driver that it is the "servicemanager".

3. while (1) {
       read the driver to get data
       parse the data
       call: a. register service: record the service's name in a linked list (see the list sketch below)
             b. get service: b.1 look the service up in the linked list; b.2 return the handle of the "server process"
   }
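As an illustration of steps 3a and 3b, here is a minimal sketch of such a user-space service list. The names (svcinfo, add_svc, find_svc) and the char-based name field are simplified stand-ins modeled on servicemanager's real svclist, not its exact code:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* One registered service: its name and the handle that refers to it. */
    struct svcinfo {
        struct svcinfo *next;
        uint32_t handle;
        char name[32];
    };

    static struct svcinfo *svclist;

    /* Registration: record (name, handle) in the linked list. */
    void add_svc(const char *name, uint32_t handle)
    {
        struct svcinfo *si = calloc(1, sizeof(*si));
        strncpy(si->name, name, sizeof(si->name) - 1);
        si->handle = handle;
        si->next = svclist;
        svclist = si;
    }

    /* Lookup: find the service by name and return the server's handle. */
    uint32_t find_svc(const char *name)
    {
        struct svcinfo *si;
        for (si = svclist; si; si = si->next)
            if (!strcmp(si->name, name))
                return si->handle;
        return 0;   /* 0 means "not found" (0 is reserved for the servicemanager itself) */
    }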

Server:

1. Open the driver.

2. Register the service: send it to the servicemanager.

3. while (1) {
       read the driver to get data
       parse the data
       call the corresponding function (see the dispatch sketch below)
   }
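A matching sketch of the last step in that loop: after reading and parsing a request, dispatch on the function number. Again, the function numbers and the real_led_* implementations are hypothetical stand-ins, not binder code:

    /* Hypothetical function numbers, shared with the client stub above. */
    #define FUNC_LED_OPEN 1
    #define FUNC_LED_CTL  2

    /* Hypothetical local implementations on the server. */
    extern int real_led_open(void);
    extern int real_led_ctl(int which, int status);

    /* "Call the corresponding function": buf[0] selects the function,
     * the remaining slots carry its parameters. */
    int dispatch(const int *buf)
    {
        switch (buf[0]) {
        case FUNC_LED_OPEN:
            return real_led_open();
        case FUNC_LED_CTL:
            return real_led_ctl(buf[1], buf[2]);
        default:
            return -1;   /* unknown function number */
        }
    }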

All three of them are built on top of the binder driver. Let's first look at the service_manager.c file. Its main function is roughly as follows:

int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);   // corresponds to step 1 above: open the driver
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();
    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            ALOGE("SELinux: Failed to acquire sehandle. Aborting.\n");
            abort();
        }
        if (getcon(&service_manager_context) != 0) {
            ALOGE("SELinux: Failed to acquire service_manager context. Aborting.\n");
            abort();
        }
    }

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

    svcmgr_handle = BINDER_SERVICE_MANAGER;   // corresponds to step 2 above: tell the driver it is the servicemanager
    binder_loop(bs, svcmgr_handler);          // corresponds to step 3 above: the while loop

    return 0;
}

Now let's look at binder.c (corresponding to the server role above), which contains the binder_loop function. Here is what binder_loop does:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);   // read the driver to get data
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);   // parse the data
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

Now let's look at the bctest.c file (corresponding to the client above). The code is as follows:

int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;

    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }

    argc--;
    argv++;
    while (argc > 0) {
        if (!strcmp(argv[0], "alt")) {
            handle = svcmgr_lookup(bs, svcmgr, "alt_svc_mgr");
            if (!handle) {
                fprintf(stderr, "cannot find alt_svc_mgr\n");
                return -1;
            }
            svcmgr = handle;
            fprintf(stderr, "svcmgr is via %x\n", handle);
        } else if (!strcmp(argv[0], "lookup")) {
            if (argc < 2) {
                fprintf(stderr, "argument required\n");
                return -1;
            }
            handle = svcmgr_lookup(bs, svcmgr, argv[1]);   // get the service
            fprintf(stderr, "lookup(%s) = %x\n", argv[1], handle);
            argc--;
            argv++;
        } else if (!strcmp(argv[0], "publish")) {
            if (argc < 2) {
                fprintf(stderr, "argument required\n");
                return -1;
            }
            svcmgr_publish(bs, svcmgr, argv[1], &token);   // register the service
            argc--;
            argv++;
        } else {
            fprintf(stderr, "unknown command %s\n", argv[0]);
            return -1;
        }
        argc--;
        argv++;
    }
    return 0;
}

First, let's see how svcmgr_lookup obtains a service. The code is as follows:

uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    // construct the binder_io
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);   // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))   // get the service
        return 0;

    handle = bio_get_ref(&reply);

    if (handle)
        binder_acquire(bs, handle);

    binder_done(bs, &msg, &reply);

    return handle;
}

We can see that the core function here is binder_call. Next, let's see how svcmgr_publish registers a service. The code is as follows:

int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
    int status;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);   // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);
    bio_put_obj(&msg, ptr);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))   // register the service
        return -1;

    status = bio_get_uint32(&reply);

    binder_done(bs, &msg, &reply);

    return status;
}

Once again the core function is binder_call. Its parameters answer five questions: 1. this is a remote call; 2. whom to send the data to; 3. which function to call; 4. what parameters to provide; 5. what value is returned. Concretely:

1. bs is a structure representing the open binder connection over which the remote call is made;

2. msg contains the name of the service;

3. reply receives the servicemanager's answer, describing the process that provides the service;

4. target is 0 here, which denotes the servicemanager (the driver checks if (target == 0));

5. SVC_MGR_CHECK_SERVICE means calling the "getservice" function in the servicemanager.

Now let's look at the implementation of binder_call:

int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)
{
    int res;
    struct binder_write_read bwr;
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;
    unsigned readbuf[32];

    if (msg->flags & BIO_F_OVERFLOW) {
        fprintf(stderr, "binder: txn buffer overflow\n");
        goto fail;
    }

    // construct the parameters
    writebuf.cmd = BC_TRANSACTION;
    writebuf.txn.target.handle = target;
    writebuf.txn.code = code;
    writebuf.txn.flags = 0;
    writebuf.txn.data_size = msg->data - msg->data0;
    writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);
    writebuf.txn.data.ptr.buffer = (uintptr_t) msg->data0;
    writebuf.txn.data.ptr.offsets = (uintptr_t) msg->offs0;

    bwr.write_size = sizeof(writebuf);
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) &writebuf;

    hexdump(msg->data0, msg->data - msg->data0);

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);   // call ioctl to send the data
        if (res < 0) {
            fprintf(stderr, "binder: ioctl failed (%s)\n", strerror(errno));
            goto fail;
        }

        res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);
        if (res == 0) return 0;
        if (res < 0) goto fail;
    }

fail:
    memset(reply, 0, sizeof(*reply));
    reply->flags |= BIO_F_IOERROR;
    return -1;
}

We can see that the parameters are constructed in writebuf: the payload is placed in a buffer and described by a binder_io. binder_call first converts the binder_io into a binder_write_read, then calls ioctl to send the data; finally, binder_parse converts the returned binder_write_read back into a binder_io.

Now let's look at how IPC moves the data. As we said earlier, an IPC transfer has three elements:

1. The source (the sender itself).

2. The destination: a handle to the "service", i.e., the data is sent to the process that implements the service; the handle is a reference to that service.

3. The data itself.

A handle is process A's reference to the service S provided by process B.

Let's explain the key words in that sentence. The "reference" is represented by the binder_ref structure; the code is as follows:

struct binder_ref {
    /* Lookups needed: */
    /*   node + proc => ref (transaction) */
    /*   desc + proc => ref (transaction, inc/dec ref) */
    /*   node => refs + procs (proc exit) */
    int debug_id;
    struct rb_node rb_node_desc;
    struct rb_node rb_node_node;
    struct hlist_node node_entry;
    struct binder_proc *proc;
    struct binder_node *node;
    uint32_t desc;
    int strong;
    int weak;
    struct binder_ref_death *death;
};

We can see that binder_ref contains a pointer to a binder_node, and this binder_node stands for service S. The code is as follows:

struct binder_node {
    int debug_id;
    struct binder_work work;
    union {
        struct rb_node rb_node;
        struct hlist_node dead_node;
    };
    struct binder_proc *proc;
    struct hlist_head refs;
    int internal_strong_refs;
    int local_weak_refs;
    int local_strong_refs;
    void __user *ptr;
    void __user *cookie;
    unsigned has_strong_ref:1;
    unsigned pending_strong_ref:1;
    unsigned has_weak_ref:1;
    unsigned pending_weak_ref:1;
    unsigned has_async_transaction:1;
    unsigned accept_fds:1;
    unsigned min_priority:8;
    struct list_head async_todo;
};

binder_node in turn contains a pointer to a binder_proc, and this binder_proc stands for process B. The code is as follows:

struct binder_proc {
    struct hlist_node proc_node;
    struct rb_root threads;
    struct rb_root nodes;
    struct rb_root refs_by_desc;
    struct rb_root refs_by_node;
    int pid;
    struct vm_area_struct *vma;
    struct mm_struct *vma_vm_mm;
    struct task_struct *tsk;
    struct files_struct *files;
    struct hlist_node deferred_work_node;
    int deferred_work;
    void *buffer;
    ptrdiff_t user_buffer_offset;
    struct list_head buffers;
    struct rb_root free_buffers;
    struct rb_root allocated_buffers;
    size_t free_async_space;
    struct page **pages;
    size_t buffer_size;
    uint32_t buffer_free;
    struct list_head todo;
    wait_queue_head_t wait;
    struct binder_stats stats;
    struct list_head delivered_death;
    int max_threads;
    int requested_threads;
    int requested_threads_started;
    int ready_threads;
    long default_priority;
    struct dentry *debugfs_entry;
};

binder_proc contains a threads red-black tree, which holds the process's binder threads. The code is as follows:

struct binder_thread {
    struct binder_proc *proc;
    struct rb_node rb_node;
    int pid;
    int looper;
    struct binder_transaction *transaction_stack;
    struct list_head todo;
    uint32_t return_error;  /* Write failed, return error code in read buf */
    uint32_t return_error2; /* Write failed, return error code in read */
                            /* buffer. Used when sending a reply to a dead process that */
                            /* we are also waiting on */
    wait_queue_head_t wait;
    struct binder_stats stats;
};

This chain (binder_ref -> binder_node -> binder_proc -> binder_thread) is how the driver locates the process, and the thread within it, that should receive the data.

When a server registers a service, it passes a flat_binder_object to the driver:

1. The kernel driver creates a binder_node for each service, with binder_node.proc = the server process.

2. The servicemanager creates a binder_ref in the driver that references the binder_node, with binder_ref.desc = 1, 2, 3, ...; in user space it records the service in a linked list of (name, handle) pairs, where handle is that binder_ref.desc.

3. A client queries the servicemanager for the service by sending its name.

4. The servicemanager returns the handle to the driver.

5. Using the handle, the driver finds the binder_ref in the servicemanager's binder_ref red-black tree, finds the binder_node via binder_ref.node, and then creates a new binder_ref for the client (its desc numbering also starts at 1). The driver returns that desc to the client, and this is what the client calls its handle.

6. When the client later sends to the handle, the driver finds the binder_ref from the handle, the binder_node from the binder_ref, and the server process from binder_node.proc, as sketched below.
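A compressed sketch of that chain of lookups, using pared-down stand-ins for the kernel structures above (only the fields this path touches; the real driver walks the refs_by_desc red-black tree under locks, which is reduced to a plain list here):

    #include <stdint.h>
    #include <stddef.h>

    /* Pared-down stand-ins for the kernel structures shown earlier. */
    struct binder_proc;                 /* the destination process */

    struct binder_node {
        struct binder_proc *proc;       /* the process that provides the service */
    };

    struct binder_ref {
        uint32_t desc;                  /* the value user space calls "handle" */
        struct binder_node *node;       /* the service this reference points at */
        struct binder_ref *next;        /* list stand-in for the refs_by_desc rb-tree */
    };

    /* handle -> binder_ref -> binder_node -> binder_proc */
    struct binder_proc *resolve_handle(struct binder_ref *refs, uint32_t handle)
    {
        struct binder_ref *ref;
        for (ref = refs; ref; ref = ref->next)
            if (ref->desc == handle)
                return ref->node->proc;
        return NULL;                    /* no such reference in this process */
    }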

Now let's look at the data transfer process and the process switching it involves.

From client to server, it is write first and then read:

1. The client constructs the data and calls ioctl to send it.

2. The driver finds the server process from the handle.

3. The data is put on that process's binder_proc.todo list.

4. The client sleeps.

5. It is woken up.

6. It takes the data off the todo list and returns to user space.

On the server side, it is read first and then write:

1. It sleeps in the read, waiting for data.

2. It is woken up.

3. It takes the data off the todo list and returns to user space.

4. It processes the data.

5. It writes the result back to the client, i.e., puts it on the client's binder_proc.todo list and wakes the client up (a user-space sketch of this todo-list handoff follows).
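This handoff is a classic wait-queue pattern. Here is a user-space analogue using pthreads (this is illustrative code, not the driver's): each side owns a todo list guarded by a lock, the sender appends work and signals, and the receiver sleeps until woken:

    #include <pthread.h>
    #include <stdlib.h>

    /* User-space analogue of binder_proc.todo plus binder_proc.wait:
     * a locked work list and a condition variable standing in for the
     * kernel wait queue. */
    struct todo_item {
        struct todo_item *next;
        void *data;
    };

    struct proc_todo {
        pthread_mutex_t lock;
        pthread_cond_t wait;        /* stand-in for binder_proc.wait */
        struct todo_item *head;     /* stand-in for binder_proc.todo */
    };

    /* Sender side: put the data on the target's todo list and wake it. */
    void todo_push(struct proc_todo *p, void *data)
    {
        struct todo_item *it = malloc(sizeof(*it));
        it->data = data;
        pthread_mutex_lock(&p->lock);
        it->next = p->head;
        p->head = it;
        pthread_cond_signal(&p->wait);   /* like wake_up_interruptible(&proc->wait) */
        pthread_mutex_unlock(&p->lock);
    }

    /* Receiver side: sleep until something lands on the todo list. */
    void *todo_pop(struct proc_todo *p)
    {
        pthread_mutex_lock(&p->lock);
        while (!p->head)
            pthread_cond_wait(&p->wait, &p->lock);   /* the "sleep" step */
        struct todo_item *it = p->head;
        p->head = it->next;
        pthread_mutex_unlock(&p->lock);
        void *data = it->data;
        free(it);
        return data;
    }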

So, in general, how is the data copied? The conventional approach requires two copies:

1. The client constructs the data.

2. The driver does copy_from_user.

3. The server: 3.1 the driver does copy_to_user; 3.2 user space then processes the data.

Binder copies the data only once:

1. The server performs an mmap mapping, so its user space can directly access a block of memory inside the driver.

2. The client constructs the data, and the driver pulls it in with copy_from_user.

3. The server then uses the data directly in user mode.

It is worth noting, though, that in the binder scheme one piece of data is still copied twice on the way from test_client to test_server: in ioctl, the binder_write_read structure is first copied by copy_from_user into a local kernel variable, and later copied out to the test_server side by copy_to_user. The rest of the data is copied by copy_from_user from the test_client side into kernel memory, which the test_server side then accesses directly through its mmap mapping, with no copy_to_user needed. This single-copy path is why binder communication is roughly twice as efficient. A user-space sketch of the idea follows.
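As a user-space analogue of that single-copy path (POSIX shared memory here, not binder itself): the receiver maps the region once, and afterwards each transfer is a single copy into memory the receiver can already see.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* The receiver maps the region once, like test_server mmap'ing
     * /dev/binder; every later transfer is a single memcpy into it. */
    #define REGION_SIZE (128 * 1024)

    char *map_region(const char *name)
    {
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return NULL;
        ftruncate(fd, REGION_SIZE);
        char *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);
        return p == MAP_FAILED ? NULL : p;
    }

    /* Sender: one copy into the shared region (binder's copy_from_user). */
    void send_once(char *region, const void *data, size_t len)
    {
        memcpy(region, data, len);
        /* The receiver reads `region` directly: no second copy,
         * i.e., no copy_to_user. (Synchronization is elided here.) */
    }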

Next, let's look at the service registration process, starting with the binder driver framework. In the binder_init function we can see that the device is registered with misc_register, which means binder is a misc device driver. The registered binder_miscdev structure points to the binder_fops structure, and binder_fops holds the entry points for all the operations the binder driver supports. The specific code is as follows:

static int __init binder_init(void)
{
    int ret;

    binder_deferred_workqueue = create_singlethread_workqueue("binder");
    if (!binder_deferred_workqueue)
        return -ENOMEM;

    binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
    if (binder_debugfs_dir_entry_root)
        binder_debugfs_dir_entry_proc = debugfs_create_dir("proc",
                             binder_debugfs_dir_entry_root);
    ret = misc_register(&binder_miscdev);
    if (binder_debugfs_dir_entry_root) {
        debugfs_create_file("state", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL, &binder_state_fops);
        debugfs_create_file("stats", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL, &binder_stats_fops);
        debugfs_create_file("transactions", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL, &binder_transactions_fops);
        debugfs_create_file("transaction_log", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log, &binder_transaction_log_fops);
        debugfs_create_file("failed_transaction_log", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log_failed, &binder_transaction_log_fops);
    }
    return ret;
}

The binder_miscdev code is as follows

static struct miscdevice binder_miscdev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name = "binder",
    .fops = &binder_fops
};

The binder_fops code is as follows

static const struct file_operations binder_fops = {
    .owner = THIS_MODULE,
    .poll = binder_poll,
    .unlocked_ioctl = binder_ioctl,
    .mmap = binder_mmap,
    .open = binder_open,
    .flush = binder_flush,
    .release = binder_release,
};

In the servicemanager, binder_open first opens the binder driver, then issues an ioctl to check the driver version, and finally calls mmap. The code is as follows:

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr, "binder: cannot open device (%s)\n", strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr, "binder: driver version differs from user space\n");
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n", strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

After doing this, the servicemanager enters binder_loop. In binder_loop, BC_ENTER_LOOPER is stored in readbuf, then an ioctl BINDER_WRITE_READ is issued, and the result is parsed by binder_parse. The code is as follows:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);   // read the driver to get data
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);   // parse the data
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

binder_write is passed BC_ENTER_LOOPER; let's see what it does. The code is as follows:

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}

We can see that it first constructs a binder_write_read structure and then issues the BINDER_WRITE_READ command through ioctl. Let's go into the driver's binder_ioctl function and see what the BINDER_WRITE_READ operation does. The code is as follows:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;

    binder_lock(__func__);
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
                 proc->pid, thread->pid, bwr.write_size,
                 bwr.write_buffer, bwr.read_size, bwr.read_buffer);

        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
                 proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
                 bwr.read_consumed, bwr.read_size);
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    case BINDER_SET_MAX_THREADS:
        if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
            ret = -EINVAL;
            goto err;
        }
        break;
    case BINDER_SET_CONTEXT_MGR:
        if (binder_context_mgr_node != NULL) {
            printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
            ret = -EBUSY;
            goto err;
        }
        ret = security_binder_set_context_mgr(proc->tsk);
        if (ret < 0)
            goto err;
        if (binder_context_mgr_uid != -1) {
            if (binder_context_mgr_uid != current->cred->euid) {
                printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
                       current->cred->euid, binder_context_mgr_uid);
                ret = -EPERM;
                goto err;
            }
        } else
            binder_context_mgr_uid = current->cred->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        if (binder_context_mgr_node == NULL) {
            ret = -ENOMEM;
            goto err;
        }
        binder_context_mgr_node->local_weak_refs++;
        binder_context_mgr_node->local_strong_refs++;
        binder_context_mgr_node->has_strong_ref = 1;
        binder_context_mgr_node->has_weak_ref = 1;
        break;
    case BINDER_THREAD_EXIT:
        binder_debug(BINDER_DEBUG_THREADS, "binder: %d:%d exit\n",
                 proc->pid, thread->pid);
        binder_free_thread(proc, thread);
        thread = NULL;
        break;
    case BINDER_VERSION:
        if (size != sizeof(struct binder_version)) {
            ret = -EINVAL;
            goto err;
        }
        if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &((struct binder_version *)ubuf)->protocol_version)) {
            ret = -EINVAL;
            goto err;
        }
        break;
    default:
        ret = -EINVAL;
        goto err;
    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
    return ret;
}

We can see that binder_ioctl first builds a binder_write_read structure by copying the user-space data into the kernel (the driver) with copy_from_user. If there is data to write, binder_thread_write writes it for the thread; read operations are handled symmetrically by binder_thread_read. Finally the binder_write_read structure is written back to the user layer. Every read begins with a BR_NOOP header; for that header binder_parse simply breaks out (a no-op), and the loop goes back to reading, i.e., sleeping. A compressed sketch of binder_parse follows.
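Here is a compressed sketch of binder_parse's command loop, assuming the type and BR_* definitions from servicemanager's binder.h. Only BR_NOOP, BR_TRANSACTION, and BR_REPLY are shown, with the helper calls reduced to comments; the real function in binder.c handles more commands and error cases:

    /* Simplified sketch: walk the commands the driver returned in
     * readbuf, one 32-bit cmd at a time. */
    int binder_parse_sketch(struct binder_state *bs, struct binder_io *bio,
                            uintptr_t ptr, size_t size, binder_handler func)
    {
        int r = 1;
        uintptr_t end = ptr + (uintptr_t) size;

        while (ptr < end) {
            uint32_t cmd = *(uint32_t *) ptr;
            ptr += sizeof(uint32_t);
            switch (cmd) {
            case BR_NOOP:               /* header of every read: do nothing */
                break;
            case BR_TRANSACTION:        /* a request arrived: hand it to func */
                /* ... build a binder_io from the binder_transaction_data that
                   follows, call func(bs, txn, &msg, &reply), send the reply ... */
                ptr += sizeof(struct binder_transaction_data);
                break;
            case BR_REPLY:              /* the answer to our BC_TRANSACTION */
                /* ... fill *bio from the transaction so the caller can read it ... */
                ptr += sizeof(struct binder_transaction_data);
                return r;
            default:
                return -1;              /* unknown command */
            }
        }
        return r;
    }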

For test_server the startup is the same: binder_open first, i.e., open the binder driver, then the version ioctl, and finally mmap. Then comes the while loop: if we pass in lookup, it calls svcmgr_lookup to get a service; if we pass in publish, it calls svcmgr_publish to register one.

In outline: test_server first sends a BC_TRANSACTION via binder_thread_write, then calls binder_thread_read, receives a BR_NOOP, and sleeps waiting. The servicemanager then receives the BR_TRANSACTION via binder_thread_read and sends a BC_REPLY via binder_thread_write; finally test_server receives the BR_REPLY via binder_thread_read.

Let's focus on how binder_thread_write handles BC_TRANSACTION:

1. Construct the data:

a. build a binder_io;

b. convert it into a binder_transaction_data;

c. wrap that in a binder_write_read structure.

2. Send the data via ioctl.

3. Enter the driver. binder_ioctl puts the data on the todo list of the servicemanager process and wakes it up:

a. find the destination process, the servicemanager, from the handle;

b. copy_from_user the data into the space previously mapped by mmap;

c. process the offsets data, i.e., the flat_binder_object: construct a binder_node for test_server, construct a binder_ref for the servicemanager, and increase the reference counts;

d. wake up the destination process.

From this point on, everything is a back-and-forth between binder_thread_write and binder_thread_read in the test_server and servicemanager processes.

Of all the cmds involved, only BC_TRANSACTION, BR_TRANSACTION, BC_REPLY, and BR_REPLY involve two processes; all the other cmds are merely interactions between the app and the driver, used to change or report status.

Let's summarize the registration process and acquisition process of the service.

The service registration process is as follows:

1. Construct the data, including the name "hello" and a flat_binder_object structure.

2. Send it via ioctl.

3. Find the servicemanager process from handle = 0 and put the data on the servicemanager's todo list.

4. Construct the structures: a binder_node for the source process and a binder_ref for the destination process.

5. Wake up the servicemanager.

6. Call the ADD_SERVICE function.

7. Create an entry in svclist (mainly name = "hello" plus a handle).

8. That binder_ref refers to the service; its node field points to the service's binder_node.

Steps 1 and 2 happen in test_server's user space, 3, 4, and 5 in test_server's kernel context, 6 and 7 in the servicemanager's user space, and 8 in the servicemanager's kernel context.

The service acquisition process is as follows:

1. Construct the data (name = "hello").

2. Send it to the servicemanager via ioctl, with handle = 0.

3. From handle = 0, find the servicemanager and put the data on its todo list.

4. Wake up the servicemanager.

5. The servicemanager's kernel side returns the data.

6. The servicemanager's user space takes the data out and sees a request for the hello service.

7. It finds the entry in the svclist linked list by the name "hello" and gets handle = 1.

8. It sends that handle back to the driver with ioctl.

9. In the servicemanager's refs_by_desc tree, the driver finds the binder_ref for handle = 1 and from it the binder_node of the hello service.

10. The driver creates a binder_ref for test_client and puts handle = 1 on test_client's todo list.

11. It wakes up test_client.

12. test_client's kernel side returns handle = 1.

13. test_client's user space receives handle = 1; in the kernel this corresponds to binder_ref.desc = 1, whose node points at the hello service.

Steps 1, 2, and 13 happen in test_client's user space, 3, 4, and 12 in test_client's kernel context, 6, 7, and 8 in the servicemanager's user space, and 5, 9, 10, and 11 in the servicemanager's kernel context.

Let's take a look at the service usage process, which is similar to the registration and acquisition process.

1. Get the "hello" service, handle = 1

two。 Construction data, code refers to which function to call, construction parameters

3. Send data via ioctl (write first and then read)

4. Binder_ioctl, find the destination process according to handle, namely test_server

5. Put the data into test_server 's todo linked list

6. Wake up test_server and then hibernate in binder_thread_read

7. Test_server kernel state is awakened and data is returned to test_server user state

8. Test_server user mode takes out the data and calls the function according to code and parameters

9. Construct data with return values

10. Reply to REPLY via ioctl

11. Test_server kernel state to find the process to reply, that is, test_client

twelve。 Put the data into test_client 's todo linked list

13. Wake up test_client

14. The kernel state is awakened and the data is fouled to the user space.

15. The returned value is extracted in test_client user mode, and the use process is completed.

The above 12315 is done in the user state of test_client, 45614 in the kernel state of test_client, 8 9 10 in the user state of test_server, and 7 11 12 13 in the kernel state of test_server.
