Binder Subsystem for Android System

Keywords: Linux Android SELinux Hibernate

Today, let's take a look at Binder, a subsystem of Android and one of the mechanisms Android uses for interprocess communication. Why study it? Once we understand the Binder subsystem, we can see clearly how Android processes communicate with each other, and how a Binder-based client and server talk to each other step by step. Let's begin.

The core of the Binder system is communication in two forms: IPC and RPC. With IPC, source A sends data directly to destination B; with RPC, A invokes a function on B remotely.

1. IPC communication has three elements:

1. The send source: A;

2. The destination: B registers an LED service with the service manager; A queries the service manager for the LED service and gets back a handle;

3. The data itself: e.g. char buf[512];

2. RPC communication works through remote function calls:

1. Which function is called: the server's function number;

2. What parameters are passed to it and what values are returned; the marshalled buffer itself still travels over IPC.

Example: driving an LED. In IPC mode, A sends the data directly to B. In RPC mode, led_open and led_ctl encapsulate the data and send it to B; on B's side the corresponding led_open and led_ctl unpack the data and do the work.
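The distinction can be sketched in C. This is an illustration only: led_ctl and the function numbers are hypothetical names taken from the LED example above, not a real Binder API. With plain IPC the client fills a raw buffer itself; with RPC a stub packs a function number plus arguments into that same kind of buffer, and the server unpacks and dispatches.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical function numbers for the LED service (illustration only). */
enum { LED_OPEN = 1, LED_CTL = 2 };

/* RPC stub: marshal "which function + its arguments" into a flat buffer,
 * the same kind of char buf[512] that IPC would carry verbatim. */
static size_t pack_led_ctl(char *buf, uint32_t led, uint32_t on)
{
    uint32_t code = LED_CTL;
    memcpy(buf, &code, 4);
    memcpy(buf + 4, &led, 4);
    memcpy(buf + 8, &on, 4);
    return 12;
}

/* Server-side skeleton: unpack the buffer and dispatch on the code. */
static int dispatch(const char *buf)
{
    uint32_t code, led, on;
    memcpy(&code, buf, 4);
    switch (code) {
    case LED_CTL:
        memcpy(&led, buf + 4, 4);
        memcpy(&on, buf + 8, 4);
        return (int)(led * 10 + on);  /* stand-in for the real hardware work */
    default:
        return -1;
    }
}
```

The point is that RPC is a convention layered on top of IPC: the buffer still moves between processes the same way, but both sides agree on what its bytes mean.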


Let's start with an overview of the three roles: client, service manager, and server.

        client:

1. Open the driver;

2. Get a service: query the service manager for a handle;

3. Send data to that handle.


        servicemanager:

1. Open the driver;

2. Tell the driver that it is the "service manager";

            3. while(1) {
                   read the driver to get data;
                   parse the data;
                   call: a. register service: record the service name in a linked list;
                         b. get service: b.1 look up the service in the list; b.2 return the handle of the server process.
               };
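The service manager's user-space bookkeeping from step 3a/3b can be sketched as a (name, handle) list. This is a simplified stand-in: the real service_manager.c keeps a struct svcinfo list keyed by a UTF-16 name.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for service_manager's svclist. */
struct svc {
    struct svc *next;
    uint32_t handle;
    char name[32];
};

static struct svc *svclist;

/* "register service": record the name and handle in the list */
static void svc_register(const char *name, uint32_t handle)
{
    struct svc *s = calloc(1, sizeof(*s));
    strncpy(s->name, name, sizeof(s->name) - 1);
    s->handle = handle;
    s->next = svclist;
    svclist = s;
}

/* "get service": look the name up and return its handle (0 = not found) */
static uint32_t svc_find(const char *name)
{
    for (struct svc *s = svclist; s; s = s->next)
        if (!strcmp(s->name, name))
            return s->handle;
    return 0;
}
```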


        server:

1. Open the driver;

2. Register services: send them to the service manager;

            3. while(1) {
                   read the driver to get data;
                   parse the data;
                   call the corresponding function.
               };


All three are built on the binder driver. Let's start with the service_manager.c file, whose main function looks like this:

int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);      //Corresponds to step 1 above: open the driver
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();

    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            ALOGE("SELinux: Failed to acquire sehandle. Aborting.\n");
            abort();
        }

        if (getcon(&service_manager_context) != 0) {
            ALOGE("SELinux: Failed to acquire service_manager context. Aborting.\n");
            abort();
        }
    }

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

    svcmgr_handle = BINDER_SERVICE_MANAGER;     //Step 2 above was the binder_become_context_manager() call earlier, which tells the driver this process is the service manager
    binder_loop(bs, svcmgr_handler);                  //Corresponds to step 3 above: the while(1) loop

    return 0;
}

Next, let's look at binder.c, which contains the binder_loop function (the while(1) loop in the sketches above). The code is as follows:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);                            //read the driver to get data

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func); //parse the data
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
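The binder_parse call above is where the returned commands are decoded. The real function handles many BR_* codes; the sketch below only shows the loop's shape, with made-up constants (the `_X` suffix marks them as illustrative, not the real protocol values): skip BR_NOOP, hand transactions to the callback.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy values standing in for the real BR_* codes (illustration only). */
enum { BR_NOOP_X = 0, BR_TRANSACTION_X = 1 };

typedef int (*handler)(uint32_t code);

/* Greatly simplified shape of binder_parse: walk the commands the driver
 * returned, ignore no-ops, hand transactions to the callback. */
static int parse(const uint32_t *buf, size_t n, handler func)
{
    int handled = 0;
    for (size_t i = 0; i < n; i++) {
        switch (buf[i]) {
        case BR_NOOP_X:
            break;                    /* nothing to do; go read again */
        case BR_TRANSACTION_X:
            handled += func(buf[i]);  /* the real code parses a payload here */
            break;
        }
    }
    return handled;
}

static int count_one(uint32_t code) { (void)code; return 1; }
```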

Now let's look at the bctest.c file (corresponding to the client above). The code is as follows:

int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;

    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }

    argc--;
    argv++;
    while (argc > 0) {
        if (!strcmp(argv[0],"alt")) {
            handle = svcmgr_lookup(bs, svcmgr, "alt_svc_mgr");
            if (!handle) {
                fprintf(stderr,"cannot find alt_svc_mgr\n");
                return -1;
            }
            svcmgr = handle;
            fprintf(stderr,"svcmgr is via %x\n", handle);
        } else if (!strcmp(argv[0],"lookup")) {
            if (argc < 2) {
                fprintf(stderr,"argument required\n");
                return -1;
            }
            handle = svcmgr_lookup(bs, svcmgr, argv[1]);          //Access Services
            fprintf(stderr,"lookup(%s) = %x\n", argv[1], handle);
            argc--;
            argv++;
        } else if (!strcmp(argv[0],"publish")) {
            if (argc < 2) {
                fprintf(stderr,"argument required\n");
                return -1;
            }
            svcmgr_publish(bs, svcmgr, argv[1], &token);          //Registration Services
            argc--;
            argv++;
        } else {
            fprintf(stderr,"unknown command %s\n", argv[0]);
            return -1;
        }
        argc--;
        argv++;
    }
    return 0;
}

First, let's see how the svcmgr_lookup function gets the service. The code is as follows

uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    //Construct binder_io
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))    //Access Services
        return 0;

    handle = bio_get_ref(&reply);

    if (handle)
        binder_acquire(bs, handle);

    binder_done(bs, &msg, &reply);

    return handle;
}

We see that the core function is binder_call. Before examining it, let's see how the svcmgr_publish function registers a service. The code is as follows:

int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
    int status;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);
    bio_put_obj(&msg, ptr);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))    //Registration Services
        return -1;

    status = bio_get_uint32(&reply);

    binder_done(bs, &msg, &reply);

    return status;
}

The core function here is also binder_call. A remote call must specify: who to send the data to, which function to call, what parameters to pass, and where the return value goes.

The parameters of binder_call serve these roles:

1. bs is the state structure for the remote call;

2. msg contains the name of the service;

3. reply receives the data the service manager sends back, identifying the process that provides the service;

4. target is 0, which denotes the service manager (the driver checks if (target == 0));

        5. SVC_MGR_CHECK_SERVICE indicates that the service manager's "get service" routine should be called.

Let's look at the implementation of binder_call:

int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)
{
    int res;
    struct binder_write_read bwr;
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;
    unsigned readbuf[32];

    if (msg->flags & BIO_F_OVERFLOW) {
        fprintf(stderr,"binder: txn buffer overflow\n");
        goto fail;
    }

    //Construction parameters
    writebuf.cmd = BC_TRANSACTION;
    writebuf.txn.target.handle = target;
    writebuf.txn.code = code;
    writebuf.txn.flags = 0;
    writebuf.txn.data_size = msg->data - msg->data0;
    writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);
    writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;
    writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0;

    bwr.write_size = sizeof(writebuf);
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) &writebuf;

    hexdump(msg->data0, msg->data - msg->data0);
    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);    //Call ioctl to send data

        if (res < 0) {
            fprintf(stderr,"binder: ioctl failed (%s)\n", strerror(errno));
            goto fail;
        }

        res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);
        if (res == 0) return 0;
        if (res < 0) goto fail;
    }

fail:
    memset(reply, 0, sizeof(*reply));
    reply->flags |= BIO_F_IOERROR;
    return -1;
}

We see the parameters built into writebuf, backed by the caller's buffer and described by a binder_io. binder_call converts the binder_io into a binder_write_read, sends it with ioctl, and finally binder_parse converts the returned binder_write_read back into a binder_io.

Now let's see how IPC moves the data. As mentioned earlier, IPC transmission has three elements:

1. The source (the sender itself);

2. The destination: a handle representing a "service", i.e. data is sent to the process that implements the service; a handle is a reference to a service;

3. The data.

handle is process A's reference to service S provided by process B.

Let's unpack the keywords in that sentence.

First, the reference itself, struct binder_ref:

struct binder_ref {
    /* Lookups needed: */
    /*   node + proc => ref (transaction) */
    /*   desc + proc => ref (transaction, inc/dec ref) */
    /*   node => refs + procs (proc exit) */
    int debug_id;
    struct rb_node rb_node_desc;
    struct rb_node rb_node_node;
    struct hlist_node node_entry;
    struct binder_proc *proc;
    struct binder_node *node;
    uint32_t desc;
    int strong;
    int weak;
    struct binder_ref_death *death;
};

Inside the binder_ref structure we see a binder_node pointer, which represents service S itself. The code is as follows:

struct binder_node {
    int debug_id;
    struct binder_work work;
    union {
        struct rb_node rb_node;
        struct hlist_node dead_node;
    };
    struct binder_proc *proc;
    struct hlist_head refs;
    int internal_strong_refs;
    int local_weak_refs;
    int local_strong_refs;
    void __user *ptr;
    void __user *cookie;
    unsigned has_strong_ref:1;
    unsigned pending_strong_ref:1;
    unsigned has_weak_ref:1;
    unsigned pending_weak_ref:1;
    unsigned has_async_transaction:1;
    unsigned accept_fds:1;
    unsigned min_priority:8;
    struct list_head async_todo;
};

Inside the binder_node structure there is a binder_proc pointer, which represents process B. The code is as follows:

struct binder_proc {
    struct hlist_node proc_node;
    struct rb_root threads;
    struct rb_root nodes;
    struct rb_root refs_by_desc;
    struct rb_root refs_by_node;
    int pid;
    struct vm_area_struct *vma;
    struct mm_struct *vma_vm_mm;
    struct task_struct *tsk;
    struct files_struct *files;
    struct hlist_node deferred_work_node;
    int deferred_work;
    void *buffer;
    ptrdiff_t user_buffer_offset;

    struct list_head buffers;
    struct rb_root free_buffers;
    struct rb_root allocated_buffers;
    size_t free_async_space;

    struct page **pages;
    size_t buffer_size;
    uint32_t buffer_free;
    struct list_head todo;
    wait_queue_head_t wait;
    struct binder_stats stats;
    struct list_head delivered_death;
    int max_threads;
    int requested_threads;
    int requested_threads_started;
    int ready_threads;
    long default_priority;
    struct dentry *debugfs_entry;
};

Inside the binder_proc structure there is a threads rb-tree whose entries are binder_thread structures, one per thread serving requests. The code is as follows:

struct binder_thread {
    struct binder_proc *proc;
    struct rb_node rb_node;
    int pid;
    int looper;
    struct binder_transaction *transaction_stack;
    struct list_head todo;
    uint32_t return_error; /* Write failed, return error code in read buf */
    uint32_t return_error2; /* Write failed, return error code in read */
        /* buffer. Used when sending a reply to a dead process that */
        /* we are also waiting on */
    wait_queue_head_t wait;
    struct binder_stats stats;
};

With these structures in hand, we can see how handles are created and resolved.

When a server registers a service, it passes a flat_binder_object to the driver:

1. The kernel driver creates a binder_node for each service; binder_node.proc points to the server process.

2. On the service manager's behalf, the driver creates a binder_ref referencing that binder_node, with binder_ref.desc = 1, 2, 3, ...; in user space the service manager keeps a list of (name, handle) pairs, where handle is that binder_ref.desc.

3. A client queries the service manager for a service, passing the name.

4. The service manager returns the handle to the driver.

5. The driver finds the binder_ref in the service manager's binder_ref red-black tree by handle, follows binder_ref.node to the binder_node, and creates a new binder_ref for the client (its desc also starting from 1). The driver returns that desc to the client; this is the client's handle.

6. When the client later uses the handle: the driver finds the binder_ref from the handle, the binder_node from the binder_ref, and the server process from the binder_node.
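The resolution chain in step 6 can be sketched with flat C structures. This is a simplified, array-based stand-in; the real driver keeps these in red-black trees (refs_by_desc) inside binder_proc.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Flattened stand-ins for the kernel structures. */
struct node { int server_pid; };                    /* binder_node */
struct ref  { uint32_t desc; struct node *node; };  /* binder_ref  */

/* Resolve a handle inside one process's ref table, the way the driver
 * walks refs_by_desc: handle -> binder_ref -> binder_node -> proc. */
static int pid_for_handle(struct ref *refs, size_t n, uint32_t handle)
{
    for (size_t i = 0; i < n; i++)
        if (refs[i].desc == handle)
            return refs[i].node->server_pid;
    return -1;  /* no such reference in this process */
}
```

Note that descs are per-process: two processes can both hold handle 1 and yet refer to different binder_nodes, because each has its own ref table.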


Next let's look at the data transfer process (and the process switch it implies).

client to server, write then read:

1. The client constructs the data and calls ioctl to send it;

2. The driver locates the server process from the handle;

3. The data is put on that process's binder_proc.todo list;

4. The client sleeps;

5. It is later woken up;

6. It takes the reply off its own todo list and returns to user space.

server side, read then write:

1. Sleep waiting to read data;

2. Wake up;

3. Take the data off the todo list and return to user space;

4. Process the data;

5. Write the result back to the client: put it on the client's binder_proc.todo list and wake the client up.
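The put-on-todo-then-wake pattern above maps naturally onto a condition variable. The sketch below is a user-space analogue with pthreads, not the kernel code (which uses a list_head plus wait_event/wake_up_interruptible), but it has the same sleep-until-work-arrives shape.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  todo = PTHREAD_COND_INITIALIZER;
static int pending;                      /* stands in for the todo list */

static void put_todo(int item)           /* client side: queue + wake   */
{
    pthread_mutex_lock(&lock);
    pending = item;
    pthread_cond_signal(&todo);
    pthread_mutex_unlock(&lock);
}

static int read_todo(void)               /* server side: sleep + take   */
{
    pthread_mutex_lock(&lock);
    while (!pending)
        pthread_cond_wait(&todo, &lock); /* "hibernate" until woken     */
    int item = pending;
    pending = 0;
    pthread_mutex_unlock(&lock);
    return item;
}
```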


How is data normally copied between processes? The conventional approach needs two copies:

1. The client constructs the data;

2. The driver does copy_from_user;

3. On the server: 3.1 the driver does copy_to_user;

3.2 user space processes it.

Binder copies the payload only once:

1. The server mmaps a region so that its user space can directly read a block of memory owned by the driver;

2. The client constructs the data; the driver does copy_from_user into that region;

3. The server uses the data directly in user mode.


It is worth noting, however, that even with binder one piece of data is still copied twice on the way from test_client to test_server: in ioctl, the binder_write_read structure itself is copied with copy_from_user into a local kernel variable and then copied back out with copy_to_user on the test_server side. The payload, by contrast, is copy_from_user'd once from test_client into kernel memory, which test_server then accesses directly through its mmap with no copy_to_user. This is why the binder system can roughly double communication efficiency.
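The one-copy property can be demonstrated with a user-space analogue: two views of the same shared mapping see a single copy of the data. This is only an analogue using an ordinary temp file; /dev/binder's actual mapping is set up by binder_mmap in the driver.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* User-space analogue of binder's single copy: the "driver" writes the
 * payload once into memory that the "server" has mapped, so the server
 * reads it with no second copy. */
int shared_demo(void)
{
    char tmpl[] = "/tmp/binder_demo_XXXXXX";
    int fd = mkstemp(tmpl);
    if (fd < 0) return -1;
    unlink(tmpl);
    if (ftruncate(fd, 4096) < 0) return -1;

    /* "server": maps the buffer, like mmap on the binder fd */
    char *server_view = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (server_view == MAP_FAILED) return -1;

    /* "driver": copies the client's data in once (the copy_from_user) */
    const char *client_data = "hello binder";
    if (pwrite(fd, client_data, strlen(client_data) + 1, 0) < 0) return -1;

    /* server reads it directly from its mapping: no copy_to_user */
    int ok = (strcmp(server_view, "hello binder") == 0);
    munmap(server_view, 4096);
    close(fd);
    return ok;
}
```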


Next, let's look at the service registration process, starting with binder's driver framework. In the binder_init function we see the driver registered with misc_register, so binder is a misc device driver. The registered binder_miscdev structure points at binder_fops, which holds the entry points for all of the driver's operations. The code is as follows:

static int __init binder_init(void)
{
    int ret;

    binder_deferred_workqueue = create_singlethread_workqueue("binder");
    if (!binder_deferred_workqueue)
        return -ENOMEM;

    binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
    if (binder_debugfs_dir_entry_root)
        binder_debugfs_dir_entry_proc = debugfs_create_dir("proc",
                         binder_debugfs_dir_entry_root);
    ret = misc_register(&binder_miscdev);
    if (binder_debugfs_dir_entry_root) {
        debugfs_create_file("state",
                    S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL,
                    &binder_state_fops);
        debugfs_create_file("stats",
                    S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL,
                    &binder_stats_fops);
        debugfs_create_file("transactions",
                    S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL,
                    &binder_transactions_fops);
        debugfs_create_file("transaction_log",
                    S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log,
                    &binder_transaction_log_fops);
        debugfs_create_file("failed_transaction_log",
                    S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log_failed,
                    &binder_transaction_log_fops);
    }
    return ret;
}

The binder_miscdev code is as follows

static struct miscdevice binder_miscdev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name = "binder",
    .fops = &binder_fops
};

The binder_fops code is as follows

static const struct file_operations binder_fops = {
    .owner = THIS_MODULE,
    .poll = binder_poll,
    .unlocked_ioctl = binder_ioctl,
    .mmap = binder_mmap,
    .open = binder_open,
    .flush = binder_flush,
    .release = binder_release,
};

In service_manager, binder_open first opens the binder driver, then issues an ioctl, and finally mmaps. The code is as follows:

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr, "binder: driver version differs from user space\n");
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

After these operations, service_manager enters binder_loop, which we already walked through above: readbuf first carries BC_ENTER_LOOPER and is handed to the driver by binder_write; the loop then repeatedly issues the BINDER_WRITE_READ ioctl and parses the result with binder_parse.

BC_ENTER_LOOPER is passed to binder_write; let's see what binder_write does. The code is as follows:

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

We see that it constructs a binder_write_read structure and sends the BINDER_WRITE_READ command, which lands in the driver's binder_ioctl function. Let's go into binder_ioctl and see what the BINDER_WRITE_READ case does. The code is as follows:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n",
            proc->pid, current->pid, cmd, arg);*/

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;

    binder_lock(__func__);
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
                 proc->pid, thread->pid, bwr.write_size, bwr.write_buffer,
                 bwr.read_size, bwr.read_buffer);

        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
                 proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
                 bwr.read_consumed, bwr.read_size);
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    case BINDER_SET_MAX_THREADS:
        if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
            ret = -EINVAL;
            goto err;
        }
        break;
    case BINDER_SET_CONTEXT_MGR:
        if (binder_context_mgr_node != NULL) {
            printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
            ret = -EBUSY;
            goto err;
        }
        ret = security_binder_set_context_mgr(proc->tsk);
        if (ret < 0)
            goto err;
        if (binder_context_mgr_uid != -1) {
            if (binder_context_mgr_uid != current->cred->euid) {
                printk(KERN_ERR "binder: BINDER_SET_"
                       "CONTEXT_MGR bad uid %d != %d\n",
                       current->cred->euid,
                       binder_context_mgr_uid);
                ret = -EPERM;
                goto err;
            }
        } else
            binder_context_mgr_uid = current->cred->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        if (binder_context_mgr_node == NULL) {
            ret = -ENOMEM;
            goto err;
        }
        binder_context_mgr_node->local_weak_refs++;
        binder_context_mgr_node->local_strong_refs++;
        binder_context_mgr_node->has_strong_ref = 1;
        binder_context_mgr_node->has_weak_ref = 1;
        break;
    case BINDER_THREAD_EXIT:
        binder_debug(BINDER_DEBUG_THREADS, "binder: %d:%d exit\n",
                 proc->pid, thread->pid);
        binder_free_thread(proc, thread);
        thread = NULL;
        break;
    case BINDER_VERSION:
        if (size != sizeof(struct binder_version)) {
            ret = -EINVAL;
            goto err;
        }
        if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &((struct binder_version *)ubuf)->protocol_version)) {
            ret = -EINVAL;
            goto err;
        }
        break;
    default:
        ret = -EINVAL;
        goto err;
    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
    return ret;
}

We see that the driver builds a binder_write_read structure and copies the user-space data into the kernel with copy_from_user. If there is data to write, binder_thread_write handles it; reads go through binder_thread_read the same way. Finally, the binder_write_read structure is copied back to user space. Every read begins with a BR_NOOP header; binder_parse simply skips such a header, and the caller goes back to sleep in the driver.

For test_server, the flow is: binder_open (open the driver), then the version ioctl, then mmap. It then loops over its arguments: if we pass lookup, it calls svcmgr_lookup to get a service; if publish, it calls svcmgr_publish to register one.

In outline: test_server first sends BC_TRANSACTION through binder_thread_write, then calls binder_thread_read, gets a BR_NOOP, and sleeps. service_manager then receives BR_TRANSACTION through binder_thread_read and sends a BC_REPLY through binder_thread_write; finally test_server receives BR_REPLY through binder_thread_read.
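The round trip above can be modelled as a fixed sequence of which side executes each command. The enum values are illustrative toy constants (hence the `_X` suffix), not the real BC_/BR_ numbers.

```c
#include <assert.h>

/* Toy model of the transaction round trip. */
enum cmd { BC_TRANSACTION_X, BR_TRANSACTION_X, BC_REPLY_X, BR_REPLY_X };

/* Who executes each step: 0 = the sender (e.g. test_server registering),
 * 1 = the receiver (e.g. service_manager). */
static int actor(enum cmd c)
{
    switch (c) {
    case BC_TRANSACTION_X: return 0;  /* sender writes the request    */
    case BR_TRANSACTION_X: return 1;  /* receiver reads it            */
    case BC_REPLY_X:       return 1;  /* receiver writes the reply    */
    case BR_REPLY_X:       return 0;  /* sender reads the reply back  */
    }
    return -1;
}
```

BC_* commands always travel from an app into the driver, and BR_* commands from the driver back to an app; the pairing above is what stitches two such app/driver conversations into one cross-process call.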

Let's focus on how binder_thread_write handles BC_TRANSACTION:

1. Construct the data:

a. build a binder_io;

b. convert it to binder_transaction_data;

c. place that in a binder_write_read structure.

2. Send the data through ioctl;

3. In the driver, binder_ioctl puts the data on the service_manager process's todo list and wakes it up:

a. find the destination process service_manager from the handle (and the space it mmap'ed earlier);

b. copy_from_user the data into that mmap space;

c. process the offsets data, i.e. the flat_binder_object: construct a binder_node for test_server, construct a binder_ref for service_manager, and increase the reference counts;

d. wake the process up.

From here on it is a back-and-forth of binder_thread_write and binder_thread_read between the test_server and service_manager processes.

Of all the commands involved, only BC_TRANSACTION, BR_TRANSACTION, BC_REPLY and BR_REPLY involve two processes; every other command is just an app-to-driver interaction for changing or reporting status.

Let's summarize the registration and acquisition processes for services.

The service registration process is as follows:

1. Construct the data, including name = "hello" and a flat_binder_object;

2. Send it with ioctl;

3. Find the service_manager process from handle = 0 and put the data on its todo list;

4. Construct the structures: a binder_node for the source process and a binder_ref for the destination process;

5. Wake up service_manager;

6. Call the ADD_SERVICE handler;

7. Create an entry in svclist (holding name = "hello" and the handle);

8. The binder_ref references the service; its node points at the binder_node.

Steps 1 and 2 above run in test_server's user state, 3, 4 and 5 in test_server's kernel state, 6 and 7 in service_manager's user state, and 8 in service_manager's kernel state.

The service acquisition process is as follows:

1. Construct the data (name = "hello");

2. Send it to service_manager via ioctl, with handle = 0;

3. Find service_manager from handle = 0 and put the data on its todo list;

4. Wake up service_manager;

5. service_manager's kernel state returns the data;

6. service_manager's user state takes the data and sees a request for the hello service;

7. It finds the entry in svclist by the name "hello" and gets handle = 1;

8. It sends that handle back to the driver with ioctl;

9. In kernel state, the driver finds the binder_ref with handle = 1 in service_manager's refs_by_desc tree, and from it the binder_node of the hello service;

10. It creates a binder_ref for test_client and puts handle = 1 on test_client's todo list;

11. Wake up test_client;

12. test_client's kernel state returns handle = 1;

13. test_client's user state receives handle = 1; the matching binder_ref.desc = 1, and its node points at the hello service.

Steps 1, 2 and 13 above run in test_client's user state, 3, 4 and 12 in test_client's kernel state, 6, 7 and 8 in service_manager's user state, and 5, 9, 10 and 11 in service_manager's kernel state.

Next, let's look at the service usage process, which is similar to registration and acquisition:

1. Get the "hello" service, handle = 1;

2. Construct the data: code says which function to call, and the buffer carries its parameters;

3. Send the data through ioctl (write, then read);

4. binder_ioctl finds the target process from the handle, i.e. test_server;

5. Put the data on test_server's todo list;

6. Wake up test_server, then sleep in binder_thread_read;

7. test_server's kernel state is woken and returns the data to test_server's user state;

8. test_server's user state takes the data and calls the function selected by code with the given parameters;

9. It constructs data carrying the return value;

10. It sends the reply with a REPLY via ioctl;

11. test_server's kernel state finds the process to reply to, i.e. test_client;

12. It puts the data on test_client's todo list;

13. Wake up test_client;

14. test_client's kernel state is woken and hands the data to user space;

15. test_client's user state takes out the return value, and the process is complete.

Steps 1, 2, 3 and 15 above run in test_client's user state, 4, 5, 6 and 14 in test_client's kernel state, 8, 9 and 10 in test_server's user state, and 7, 11, 12 and 13 in test_server's kernel state.

Posted by phpdeveloper82 on Mon, 16 Sep 2019 18:25:02 -0700