Talking about Callbacks and Coroutines: Building Coroutine-style Adapters

Keywords: Java Android Design Pattern Back-end

The emergence of Coroutines has upended the programming style Java has used for years. If you are the author of a third-party library, you may want to use Coroutines and Flow to make your Java callback-based library more Kotlin-friendly and coroutine-based. On the other hand, if you are an API consumer, you may prefer a Coroutines-style API that is friendlier to Kotlin and keeps the development logic linear.

Today, let's look at how to use Coroutines and Flow to simplify an API, and how to use the suspendCancellableCoroutine and callbackFlow APIs to build your own coroutine-style adapters.

Callbacks

Callbacks are a very common solution for asynchronous communication, and in most Java scenarios they are the default approach. However, callbacks also have disadvantages: the design easily leads to nested callbacks, which eventually produces code that is hard to follow, and exception handling becomes complicated, as sketched below.
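As a rough illustration, here is what chaining two dependent requests looks like with nested callbacks. The NetAPI methods, the generic NetCallback<T>, the User/Post types and the render/showError helpers are all hypothetical, chosen only to mirror the style of the examples later in this article.

fun loadUserAndPosts() {
    NetAPI.getUser(object : NetCallback<User> {
        override fun success(user: User) {
            // The second request depends on the first, so it nests inside the first callback
            NetAPI.getPosts(user.id, object : NetCallback<List<Post>> {
                override fun success(posts: List<Post>) = render(posts)
                // Errors must be handled separately at every level of nesting
                override fun error(e: String) = showError(e)
            })
        }
        override fun error(e: String) = showError(e)
    })
}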

In Kotlin, you can use Coroutines to simplify calling callbacks, but to do so you need to build your own adapter that converts the old callbacks into Kotlin-style coroutines.

Building an Adapter

For coroutines, Kotlin provides suspendCancellableCoroutine to adapt one-shot callbacks, and callbackFlow to adapt callbacks in streaming-data scenarios.

In the following scenario, a simple callback example will be used to demonstrate this transformation.

One-shot async calls

Suppose we have a function NetAPI.getData that delivers a Data object through a callback. In the coroutine world, we would rather expose it as a suspend function.

Therefore, we design an extension function on NetAPI that returns Data from a suspend function, as shown below.

suspend fun NetAPI.awaitGetData(): Data

Because this is a one-shot asynchronous operation, we can use the suspendCancellableCoroutine function. suspendCancellableCoroutine executes the code block passed to it as a parameter and then suspends the current coroutine while waiting for a signal to continue. When the resume or resumeWithException method of the coroutine's Continuation object is called, the coroutine resumes execution.

// NetAPI extension function for returning Data
suspend fun NetAPI.awaitGetData(): Data =

    // Create a cancellable suspension point with suspendCancellableCoroutine
    suspendCancellableCoroutine<Data> { continuation ->

        val callback = object : NetCallback {
            override fun success(data: Data) {
                // Resume the coroutine with the Data result
                continuation.resume(data)
            }

            override fun error(e: String) {
                // Resume the coroutine with an exception (wrap the error message in a Throwable)
                continuation.resumeWithException(Exception(e))
            }
        }
        addListener(callback)
        // The block ends here; the coroutine stays suspended until the continuation is resumed in one of the callbacks.
    }

Note that a non-cancellable version of suspendCancellableCoroutine (namely suspendCoroutine) also exists in the Coroutines library, but it is best to always choose suspendCancellableCoroutine so that cancellation of the coroutine scope is handled properly.
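For example, the cancellable continuation exposes invokeOnCancellation, which lets the adapter clean up when the calling coroutine is cancelled. A minimal sketch, assuming NetAPI also exposes a removeListener function (a hypothetical counterpart to the addListener used above):

suspend fun NetAPI.awaitGetData(): Data =
    suspendCancellableCoroutine { continuation ->
        val callback = object : NetCallback {
            override fun success(data: Data) = continuation.resume(data)
            override fun error(e: String) = continuation.resumeWithException(Exception(e))
        }
        addListener(callback)
        // If the calling coroutine is cancelled, unregister the callback so the
        // continuation is never resumed and no stale reference is kept
        continuation.invokeOnCancellation { removeListener(callback) }
    }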

Principle behind suspendCancellableCoroutine

Internally, suspendCancellableCoroutine uses suspendCoroutineUninterceptedOrReturn to obtain the Continuation of the coroutine running the suspend function. This Continuation object is then wrapped in a CancellableContinuation, which can be used to control the lifecycle of the current coroutine.

After that, the lambda passed to suspendCancellableCoroutine is executed. If the lambda already produces a result, the coroutine resumes immediately; otherwise it stays suspended until the CancellableContinuation is manually resumed from inside the lambda.

The source code is as follows.

public suspend inline fun <T> suspendCancellableCoroutine(
  crossinline block: (CancellableContinuation<T>) -> Unit
): T =
  // Get the Continuation object of the coroutine that it's running this suspend function
  suspendCoroutineUninterceptedOrReturn { uCont ->

    // Take over the control of the coroutine. The Continuation's been
    // intercepted and it follows the CancellableContinuationImpl lifecycle now
    val cancellable = CancellableContinuationImpl(uCont.intercepted(), ...)
    /* ... */
 
    // Call block of code with the cancellable continuation
    block(cancellable)
        
    // Either suspend the coroutine and wait for the Continuation to be resumed
    // manually in `block` or return a result if `block` has finished executing
    cancellable.getResult()
  }

Streaming data

If instead we want to receive multiple Data values over time (using the NetAPI.getDataList function), we need to expose them as a data stream using Flow. The ideal API looks like this.

fun NetAPI.getDataListFlow(): Flow<Data>

To convert the callback-based streaming API to a Flow, we use the callbackFlow builder. Inside the callbackFlow lambda we are in a coroutine context, so we can call suspend functions. Unlike the flow builder, callbackFlow allows values to be emitted from a different CoroutineContext through the send function, or from outside a coroutine through the offer function.

Typically, flow adapters that use callbackFlow follow these three common steps.

  • Create a callback and use offer to add elements to the flow.
  • Register the callback.
  • Wait for the consumer to cancel the flow and unregister the callback.

The sample code is shown below.

// Send Data updates to consumers
fun NetAPI.getDataListFlow() = callbackFlow<Data> {
  // A new Flow is created here, inside a coroutine scope

  // 1. Create a callback and use offer to add elements to the stream
  val callback = object : NetCallback() {
    override fun success(result: Result?) {
      result ?: return // Ignore null responses
      for (data in result.datas) {
        try {
          offer(data) // Add element to flow
        } catch (t: Throwable) {
          // exception handling 
        }
      }
    }
  }

  // 2. Register the callback to get the data flow
  requestDataUpdates(callback).addOnFailureListener { e ->
    close(e) // close in case of exception
  }

  // 3. Wait for the consumer to cancel the flow and unregister the callback. This suspends the current coroutine until the flow is closed
  awaitClose {
    // Unregister the callback when the flow is closed or cancelled
    removeDataUpdates(callback)
  }
}

Principle behind callbackFlow

Internally, callbackFlow uses a Channel, which is conceptually very similar to a blocking queue. Every channel has a capacity configuration that limits the maximum number of buffered elements.

The default capacity of the channel created by callbackFlow is 64 elements. When you try to add a new element to a full channel, send suspends the producer until there is room for the new element, while offer does not add the element to the channel and immediately returns false.
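Because the buffer operator fuses with the channel that callbackFlow creates, the consumer can adjust the effective capacity or overflow behaviour. A rough sketch, assuming the getDataListFlow adapter defined above:

lifecycleScope.launch {
    NetAPI.getDataListFlow()
        // Keep only the latest element instead of suspending the producer when full
        .buffer(Channel.CONFLATED)
        .collect { data ->
            // Handle each Data value here
        }
}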

Principle behind awaitClose

The implementation of awaitClose is itself based on suspendCancellableCoroutine. Refer to the comments in the following code.

public suspend fun ProducerScope<*>.awaitClose(block: () -> Unit = {}) {
  ...
  try {
    // Suspend the coroutine with a cancellable continuation
    suspendCancellableCoroutine<Unit> { cont ->
      // Suspend forever and resume the coroutine successfully only 
      // when the Flow/Channel is closed
      invokeOnClose { cont.resume(Unit) }
    }
  } finally {
    // Always execute caller's clean up code
    block()
  }
}

What's the use?

What is the point of converting callback-based APIs into coroutines and data streams? Let's take the most commonly used View.setOnClickListener as an example. It can be treated either as a one-shot scenario or as a data-flow scenario.

Let's rewrite it using suspendCancellableCoroutine; the code is as follows.

suspend fun View.awaitClick(block: () -> Unit): View = suspendCancellableCoroutine { continuation ->
    setOnClickListener { view ->
        // One-shot: clear the listener so the continuation is resumed only once
        setOnClickListener(null)
        if (view == null) {
            continuation.resumeWithException(Exception("error"))
        } else {
            block()
            continuation.resume(view)
        }
    }
}

Usage:
lifecycleScope.launch {
    binding.test.awaitClick {
        Toast.makeText(this@MainActivity, "loading", Toast.LENGTH_LONG).show()
    }
}

Well, honestly this feels like a lot of ceremony for nothing. Let's turn it into a data-flow scenario instead.

fun View.clickFlow(): Flow<View> {
    return callbackFlow {
        setOnClickListener {
        trySend(it) // offer is deprecated; trySend is used instead
        }
        awaitClose { setOnClickListener(null) }
    }
}

Usage:
lifecycleScope.launch {
    binding.test.clickFlow().collect {
        Toast.makeText(this@MainActivity, "loading", Toast.LENGTH_LONG).show()
    }
}

Again, not much is gained. In a scenario like this, forcing the pattern doesn't help; it just over-complicates the code for no real benefit.

So in what scenarios is this worth using? Let's think about why we need callbacks in the first place.

Most genuine callback scenarios involve asynchronous requests, that is, operations that would otherwise block, or a stream of data produced over time. A click listener, by contrast, merely invokes a closure; it can hardly be called a callback at all, it is just a lambda. So let's take another example.

Suppose a TextView displays the input from an EditText. This is a clear data-flow scenario, built mainly on the afterTextChanged callback of the EditText's TextWatcher. We rewrite it in Flow form; the code is as follows.

fun EditText.afterTextChangedFlow(): Flow<Editable?> {
    return callbackFlow {
        val watcher = object : TextWatcher {
            override fun afterTextChanged(s: Editable?) {
                trySend(s)
            }

            override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {}

            override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {}
        }
        addTextChangedListener(watcher)
        awaitClose { removeTextChangedListener(watcher) }
    }
}

Usage:
lifecycleScope.launch {
    with(binding) {
        test.afterTextChangedFlow().collect { show.text = it }
    }
}

Interesting: we didn't write a callback at the call site, yet we still get the data stream. Admittedly, it still feels a bit "forced".

However, once the data becomes a Flow, things get interesting: we can use the rich set of Flow operators to do a lot of useful things.

For example, we can throttle the input box. This scenario is very common: in a search box, the user's input triggers a search automatically, but you can't fire a search on every keystroke, or you will produce a large number of useless requests. This technique has a name of its own: input debouncing.

In the past, RxJava was the usual tool for this kind of requirement, but now Flow can do the same job while staying within the coroutine API.

We can add debounce.

lifecycleScope.launch {
    with(binding) {
        test.afterTextChangedFlow()
            .buffer(Channel.CONFLATED)
            .debounce(300)
            .collect {
                show.text = it
                // Some business processing
                viewModel.getSearchResult(it)
            }
    }
}

You can even combine a backpressure strategy with debounce so that collection only happens after the input has paused.

Of course, you can also apply buffer and debounce directly to the Flow returned by afterTextChangedFlow, making them the default processing for this scenario, as sketched below.
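A minimal sketch of that idea, assuming the afterTextChangedFlow extension defined above; the name debouncedTextChangedFlow is hypothetical and only illustrates baking the operators into the extension itself:

fun EditText.debouncedTextChangedFlow(timeoutMillis: Long = 300): Flow<Editable?> =
    afterTextChangedFlow()
        // Keep only the latest value if the collector falls behind
        .buffer(Channel.CONFLATED)
        // Emit only after the input has been idle for timeoutMillis
        .debounce(timeoutMillis)

Collectors can then simply call test.debouncedTextChangedFlow().collect { ... } and get the conflated, debounced stream by default.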
