Android Volley Complete Analysis: Understanding Volley from the Source Code Perspective

Keywords: network, mobile, Android

Reprinted from: http://blog.csdn.net/guolin_blog/article/details/17656437


Having studied the first three articles, we have more or less mastered how to use Volley, but I suspect many of you are still not very clear about how it works internally. So in this article, let's read through Volley's source code and sort out its overall workflow. This is also the last article in the Volley series.


In fact, Volley's official documentation includes a flow chart of how Volley works, as shown in the figure below.




Seeing a diagram like this for the first time, many of you probably feel as confused as I did. That is because we do not yet have a conceptual understanding of how Volley works internally, which makes the diagram hard to read on its own. But that's fine: once we have analyzed Volley's source code and then come back to this diagram, it will be much easier to understand.


Speaking of source code analysis, where should we start? Let's go back to how Volley is used. Remember that the first step is to call the Volley.newRequestQueue(context) method to obtain a RequestQueue object, so that method is the natural place to begin. The code is as follows:

  1. public static RequestQueue newRequestQueue(Context context) {  
  2.     return newRequestQueue(context, null);  
  3. }  
This method contains only one line of code: it simply calls the two-parameter overload of newRequestQueue(), passing null as the second argument. Let's look at the two-parameter newRequestQueue() method, as follows:
  1. public static RequestQueue newRequestQueue(Context context, HttpStack stack) {  
  2.     File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);  
  3.     String userAgent = "volley/0";  
  4.     try {  
  5.         String packageName = context.getPackageName();  
  6.         PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);  
  7.         userAgent = packageName + "/" + info.versionCode;  
  8.     } catch (NameNotFoundException e) {  
  9.     }  
  10.     if (stack == null) {  
  11.         if (Build.VERSION.SDK_INT >= 9) {  
  12.             stack = new HurlStack();  
  13.         } else {  
  14.             stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));  
  15.         }  
  16.     }  
  17.     Network network = new BasicNetwork(stack);  
  18.     RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);  
  19.     queue.start();  
  20.     return queue;  
  21. }  
As you can see, at line 10, if stack is null an HttpStack object is created: if the device's system version (SDK_INT) is 9 or higher, an instance of HurlStack is created; otherwise an instance of HttpClientStack is created. Internally, HurlStack uses HttpURLConnection for network communication, while HttpClientStack uses HttpClient. Why choose between them this way? You can refer to an article I translated earlier: Android accesses the network, using HttpURLConnection or HttpClient?
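Note, by the way, that this whole branch only runs when the stack parameter is null. If you want to control which HttpStack is used, you can pass your own instance to the two-parameter overload. A minimal sketch, using only the classes shown above (for example, forcing HurlStack regardless of the system version):

    // Skip the version check by supplying the HttpStack ourselves.
    RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());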


After the HttpStack has been created, a Network object (a BasicNetwork) is created to process network requests using that HttpStack. Next, a new RequestQueue object is constructed, its start() method is called to start it, and the RequestQueue is returned. With that, the newRequestQueue() method is done.


So what exactly does RequestQueue's start() method do internally? Let's follow it and see:

  1. public void start() {  
  2.     stop();  // Make sure any currently running dispatchers are stopped.  
  3.     // Create the cache dispatcher and start it.  
  4.     mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);  
  5.     mCacheDispatcher.start();  
  6.     // Create network dispatchers (and corresponding threads) up to the pool size.  
  7.     for (int i = 0; i < mDispatchers.length; i++) {  
  8.         NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,  
  9.                 mCache, mDelivery);  
  10.         mDispatchers[i] = networkDispatcher;  
  11.         networkDispatcher.start();  
  12.     }  
  13. }  
Here an instance of CacheDispatcher is created first and its start() method is called, then instances of NetworkDispatcher are created in a for loop and each of their start() methods is called. Both CacheDispatcher and NetworkDispatcher extend Thread, and by default the for loop runs four times. In other words, after calling Volley.newRequestQueue(context), there are five threads running in the background waiting for requests to arrive: CacheDispatcher is the cache thread, and the NetworkDispatchers are the network request threads.
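If the default pool of four network threads does not fit your needs, RequestQueue also has a constructor that takes the thread pool size, so you can build the queue yourself instead of going through Volley.newRequestQueue(). A minimal sketch, assuming the same classes used in newRequestQueue() above (the cache directory name here is just an illustration):

    File cacheDir = new File(context.getCacheDir(), "volley");   // illustrative cache directory
    Network network = new BasicNetwork(new HurlStack());
    // The third argument is the number of NetworkDispatcher threads to create.
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
    queue.start();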


After obtaining a RequestQueue, we only need to construct a suitable Request and pass it to RequestQueue's add() method to perform a network request. Needless to say, there must be some fairly involved logic inside the add() method, so let's take a look at it together:

  1. public <T> Request<T> add(Request<T> request) {  
  2.     // Tag the request as belonging to this queue and add it to the set of current requests.  
  3.     request.setRequestQueue(this);  
  4.     synchronized (mCurrentRequests) {  
  5.         mCurrentRequests.add(request);  
  6.     }  
  7.     // Process requests in the order they are added.  
  8.     request.setSequence(getSequenceNumber());  
  9.     request.addMarker("add-to-queue");  
  10.     // If the request is uncacheable, skip the cache queue and go straight to the network.  
  11.     if (!request.shouldCache()) {  
  12.         mNetworkQueue.add(request);  
  13.         return request;  
  14.     }  
  15.     // Insert request into stage if there's already a request with the same cache key in flight.  
  16.     synchronized (mWaitingRequests) {  
  17.         String cacheKey = request.getCacheKey();  
  18.         if (mWaitingRequests.containsKey(cacheKey)) {  
  19.             // There is already a request in flight. Queue up.  
  20.             Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);  
  21.             if (stagedRequests == null) {  
  22.                 stagedRequests = new LinkedList<Request<?>>();  
  23.             }  
  24.             stagedRequests.add(request);  
  25.             mWaitingRequests.put(cacheKey, stagedRequests);  
  26.             if (VolleyLog.DEBUG) {  
  27.                 VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);  
  28.             }  
  29.         } else {  
  30.             // Insert 'null' queue for this cacheKey, indicating there is now a request in  
  31.             // flight.  
  32.             mWaitingRequests.put(cacheKey, null);  
  33.             mCacheQueue.add(request);  
  34.         }  
  35.         return request;  
  36.     }  
  37. }  
As you can see, line 11 checks whether the current request can be cached. If it cannot, it is added directly to the network request queue at line 12; if it can, it is added to the cache queue at line 33 (unless a request with the same cache key is already in flight, in which case it is staged in mWaitingRequests). Every request is cacheable by default, but we can call Request's setShouldCache(false) method to change this behavior.
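For example, here is a minimal sketch of disabling the cache for a single request, assuming a StringRequest like the ones used earlier in this series and a RequestQueue named queue:

    StringRequest request = new StringRequest("http://www.example.com",
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    Log.d("TAG", response);
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    Log.e("TAG", "request failed", error);
                }
            });
    request.setShouldCache(false);   // skip the cache queue and go straight to the network
    queue.add(request);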


OK, since by default every request is cacheable, it will naturally be added to the cache queue, which means the cache thread that has been waiting in the background is about to start working. Let's look at the run() method in CacheDispatcher. The code is as follows:

  1. public class CacheDispatcher extends Thread {  
  2.   
  3.     ......  
  4.   
  5.     @Override  
  6.     public void run() {  
  7.         if (DEBUG) VolleyLog.v("start new dispatcher");  
  8.         Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);  
  9.         // Make a blocking call to initialize the cache.  
  10.         mCache.initialize();  
  11.         while (true) {  
  12.             try {  
  13.                 // Get a request from the cache triage queue, blocking until  
  14.                 // at least one is available.  
  15.                 final Request<?> request = mCacheQueue.take();  
  16.                 request.addMarker("cache-queue-take");  
  17.                 // If the request has been canceled, don't bother dispatching it.  
  18.                 if (request.isCanceled()) {  
  19.                     request.finish("cache-discard-canceled");  
  20.                     continue;  
  21.                 }  
  22.                 // Attempt to retrieve this item from cache.  
  23.                 Cache.Entry entry = mCache.get(request.getCacheKey());  
  24.                 if (entry == null) {  
  25.                     request.addMarker("cache-miss");  
  26.                     // Cache miss; send off to the network dispatcher.  
  27.                     mNetworkQueue.put(request);  
  28.                     continue;  
  29.                 }  
  30.                 // If it is completely expired, just send it to the network.  
  31.                 if (entry.isExpired()) {  
  32.                     request.addMarker("cache-hit-expired");  
  33.                     request.setCacheEntry(entry);  
  34.                     mNetworkQueue.put(request);  
  35.                     continue;  
  36.                 }  
  37.                 // We have a cache hit; parse its data for delivery back to the request.  
  38.                 request.addMarker("cache-hit");  
  39.                 Response<?> response = request.parseNetworkResponse(  
  40.                         new NetworkResponse(entry.data, entry.responseHeaders));  
  41.                 request.addMarker("cache-hit-parsed");  
  42.                 if (!entry.refreshNeeded()) {  
  43.                     // Completely unexpired cache hit. Just deliver the response.  
  44.                     mDelivery.postResponse(request, response);  
  45.                 } else {  
  46.                     // Soft-expired cache hit. We can deliver the cached response,  
  47.                     // but we need to also send the request to the network for  
  48.                     // refreshing.  
  49.                     request.addMarker("cache-hit-refresh-needed");  
  50.                     request.setCacheEntry(entry);  
  51.                     // Mark the response as intermediate.  
  52.                     response.intermediate = true;  
  53.                     // Post the intermediate response back to the user and have  
  54.                     // the delivery then forward the request along to the network.  
  55.                     mDelivery.postResponse(request, response, new Runnable() {  
  56.                         @Override  
  57.                         public void run() {  
  58.                             try {  
  59.                                 mNetworkQueue.put(request);  
  60.                             } catch (InterruptedException e) {  
  61.                                 // Not much we can do about this.  
  62.                             }  
  63.                         }  
  64.                     });  
  65.                 }  
  66.             } catch (InterruptedException e) {  
  67.                 // We may have been interrupted because it was time to quit.  
  68.                 if (mQuit) {  
  69.                     return;  
  70.                 }  
  71.                 continue;  
  72.             }  
  73.         }  
  74.     }  
  75. }  
The code is a bit long, so let's focus on the key points. First, the while(true) loop at line 11 shows that the cache thread runs continuously. Then, at line 23, it tries to fetch a cache entry for the request. If the entry is null (a cache miss), the request is added to the network request queue; if the entry exists but has fully expired, the request is likewise sent to the network queue; otherwise the cached data is considered usable and no new network request is needed. Next, at line 39, Request's parseNetworkResponse() method is called to parse the cached data, and the parsed result is delivered back. We will skip the delivery part for now, because its logic is basically the same as the latter half of NetworkDispatcher, so we can cover both together later.
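The expiry checks at lines 31 and 42 are driven by two timestamps stored in Cache.Entry. A minimal sketch of the idea (the field names and comparisons below mirror Volley's Cache.Entry.isExpired() and refreshNeeded(); verify against your Volley version):

    // Cache.Entry carries, among other things:
    //   entry.ttl      - the time (in ms) after which the entry is fully expired
    //   entry.softTtl  - the time (in ms) after which the entry should be refreshed
    long now = System.currentTimeMillis();
    boolean fullyExpired = entry.ttl < now;      // isExpired(): go back to the network
    boolean needsRefresh = entry.softTtl < now;  // refreshNeeded(): deliver cache, then refresh

Now let's look at how NetworkDispatcher handles the network request queue. The code is as follows: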
  1. public class NetworkDispatcher extends Thread {  
  2.     ......  
  3.     @Override  
  4.     public void run() {  
  5.         Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);  
  6.         Request<?> request;  
  7.         while (true) {  
  8.             try {  
  9.                 // Take a request from the queue.  
  10.                 request = mQueue.take();  
  11.             } catch (InterruptedException e) {  
  12.                 // We may have been interrupted because it was time to quit.  
  13.                 if (mQuit) {  
  14.                     return;  
  15.                 }  
  16.                 continue;  
  17.             }  
  18.             try {  
  19.                 request.addMarker("network-queue-take");  
  20.                 // If the request was cancelled already, do not perform the  
  21.                 // network request.  
  22.                 if (request.isCanceled()) {  
  23.                     request.finish("network-discard-cancelled");  
  24.                     continue;  
  25.                 }  
  26.                 addTrafficStatsTag(request);  
  27.                 // Perform the network request.  
  28.                 NetworkResponse networkResponse = mNetwork.performRequest(request);  
  29.                 request.addMarker("network-http-complete");  
  30.                 // If the server returned 304 AND we delivered a response already,  
  31.                 // we're done -- don't deliver a second identical response.  
  32.                 if (networkResponse.notModified && request.hasHadResponseDelivered()) {  
  33.                     request.finish("not-modified");  
  34.                     continue;  
  35.                 }  
  36.                 // Parse the response here on the worker thread.  
  37.                 Response<?> response = request.parseNetworkResponse(networkResponse);  
  38.                 request.addMarker("network-parse-complete");  
  39.                 // Write to cache if applicable.  
  40.                 // TODO: Only update cache metadata instead of entire record for 304s.  
  41.                 if (request.shouldCache() && response.cacheEntry != null) {  
  42.                     mCache.put(request.getCacheKey(), response.cacheEntry);  
  43.                     request.addMarker("network-cache-written");  
  44.                 }  
  45.                 // Post the response back.  
  46.                 request.markDelivered();  
  47.                 mDelivery.postResponse(request, response);  
  48.             } catch (VolleyError volleyError) {  
  49.                 parseAndDeliverNetworkError(request, volleyError);  
  50.             } catch (Exception e) {  
  51.                 VolleyLog.e(e, "Unhandled exception %s", e.toString());  
  52.                 mDelivery.postError(request, new VolleyError(e));  
  53.             }  
  54.         }  
  55.     }  
  56. }  
Similarly, at line 7 we see another while(true) loop, which shows that the network request thread also runs continuously. At line 28, Network's performRequest() method is called to send the network request. Network itself is only an interface; the concrete implementation used here is BasicNetwork.
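For reference, the Network interface is tiny; a sketch, matching the method that BasicNetwork implements below:

    public interface Network {
        // Performs the request synchronously and returns the raw response,
        // or throws a VolleyError if something goes wrong.
        NetworkResponse performRequest(Request<?> request) throws VolleyError;
    }

Let's look at BasicNetwork's performRequest() method, as follows: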
  1. public class BasicNetwork implements Network {  
  2.     ......  
  3.     @Override  
  4.     public NetworkResponse performRequest(Request<?> request) throws VolleyError {  
  5.         long requestStart = SystemClock.elapsedRealtime();  
  6.         while (true) {  
  7.             HttpResponse httpResponse = null;  
  8.             byte[] responseContents = null;  
  9.             Map<String, String> responseHeaders = new HashMap<String, String>();  
  10.             try {  
  11.                 // Gather headers.  
  12.                 Map<String, String> headers = new HashMap<String, String>();  
  13.                 addCacheHeaders(headers, request.getCacheEntry());  
  14.                 httpResponse = mHttpStack.performRequest(request, headers);  
  15.                 StatusLine statusLine = httpResponse.getStatusLine();  
  16.                 int statusCode = statusLine.getStatusCode();  
  17.                 responseHeaders = convertHeaders(httpResponse.getAllHeaders());  
  18.                 // Handle cache validation.  
  19.                 if (statusCode == HttpStatus.SC_NOT_MODIFIED) {  
  20.                     return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED,  
  21.                             request.getCacheEntry() == null ? null : request.getCacheEntry().data,  
  22.                             responseHeaders, true);  
  23.                 }  
  24.                 // Some responses such as 204s do not have content.  We must check.  
  25.                 if (httpResponse.getEntity() != null) {  
  26.                   responseContents = entityToBytes(httpResponse.getEntity());  
  27.                 } else {  
  28.                   // Add 0 byte response as a way of honestly representing a  
  29.                   // no-content request.  
  30.                   responseContents = new byte[0];  
  31.                 }  
  32.                 // if the request is slow, log it.  
  33.                 long requestLifetime = SystemClock.elapsedRealtime() - requestStart;  
  34.                 logSlowRequests(requestLifetime, request, responseContents, statusLine);  
  35.                 if (statusCode < 200 || statusCode > 299) {  
  36.                     throw new IOException();  
  37.                 }  
  38.                 return new NetworkResponse(statusCode, responseContents, responseHeaders, false);  
  39.             } catch (Exception e) {  
  40.                 ......  
  41.             }  
  42.         }  
  43.     }  
  44. }  

Most of this method deals with the low-level details of the network request, which we do not need to dwell on here. What we should note is that line 14 calls the performRequest() method of HttpStack, the same HttpStack instance created back in newRequestQueue(): by default, a HurlStack object if the system version is 9 or higher, or an HttpClientStack object otherwise. As mentioned earlier, these two implementations internally use HttpURLConnection and HttpClient, respectively, to send the request, so we won't follow them any further. After the HTTP call returns, performRequest() assembles the data returned by the server into a NetworkResponse object and returns it.
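Incidentally, this performRequest() call on HttpStack is also the main extension point if you ever want to plug in a different HTTP client. As a sketch, the HttpStack interface in the Volley version discussed here looks roughly like this (check your own copy for the exact signature):

    public interface HttpStack {
        // Performs the HTTP request described by the Request, adding the extra
        // headers (used for cache validation), and returns the raw HttpResponse.
        HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
                throws IOException, AuthFailureError;
    }

An OkHttp-based stack, for example, would implement this interface and be passed to newRequestQueue() as shown earlier.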


Back in NetworkDispatcher, once the NetworkResponse is returned, Request's parseNetworkResponse() method is called to parse the data in it, and the result is written to the cache if applicable. This method is implemented by subclasses of Request, because different kinds of Request naturally parse their responses differently. Remember how we built a custom Request in the previous article? parseNetworkResponse() is one of the methods we had to override.
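As a reminder, here is a minimal sketch of such an override, essentially what Volley's own StringRequest does (with the charset handling slightly simplified):

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        String parsed;
        try {
            // Decode the raw bytes using the charset declared in the response headers.
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        // Attach cache headers so NetworkDispatcher can write the entry to the cache.
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }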


After the data in the NetworkResponse has been parsed, ExecutorDelivery's postResponse() method is called to deliver the parsed result back. The code is as follows:

  1. public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {  
  2.     request.markDelivered();  
  3.     request.addMarker("post-response");  
  4.     mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));  
  5. }  
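The mResponsePoster used here is an Executor that ExecutorDelivery builds around a Handler attached to the main thread (RequestQueue creates it with new Handler(Looper.getMainLooper())). A simplified sketch of that constructor logic:

    // Simplified from ExecutorDelivery: every command is simply posted to the given
    // handler, so delivery runs on whatever thread the handler belongs to
    // (the main (UI) thread in the default RequestQueue setup).
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };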
Because mResponsePoster posts to the main thread's Handler, the ResponseDeliveryRunnable passed to its execute() method has its run() method executed on the main thread. Let's see what the code in run() looks like:
  1. private class ResponseDeliveryRunnable implements Runnable {  
  2.     private final Request mRequest;  
  3.     private final Response mResponse;  
  4.     private final Runnable mRunnable;  
  5.   
  6.     public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {  
  7.         mRequest = request;  
  8.         mResponse = response;  
  9.         mRunnable = runnable;  
  10.     }  
  11.   
  12.     @SuppressWarnings("unchecked")  
  13.     @Override  
  14.     public void run() {  
  15.         // If this request has canceled, finish it and don't deliver.  
  16.         if (mRequest.isCanceled()) {  
  17.             mRequest.finish("canceled-at-delivery");  
  18.             return;  
  19.         }  
  20.         // Deliver a normal response or error, depending.  
  21.         if (mResponse.isSuccess()) {  
  22.             mRequest.deliverResponse(mResponse.result);  
  23.         } else {  
  24.             mRequest.deliverError(mResponse.error);  
  25.         }  
  26.         // If this is an intermediate response, add a marker, otherwise we're done  
  27.         // and the request can be finished.  
  28.         if (mResponse.intermediate) {  
  29.             mRequest.addMarker("intermediate-response");  
  30.         } else {  
  31.             mRequest.finish("done");  
  32.         }  
  33.         // If we have been provided a post-delivery runnable, run it.  
  34.         if (mRunnable != null) {  
  35.             mRunnable.run();  
  36.         }  
  37.    }  
  38. }  

The code here is short and we only need to focus on one point: at line 22, Request's deliverResponse() method is called. Does it look familiar? Yes, this is the other method we have to override when writing a custom Request. The response of every network request is ultimately delivered to this method, and inside it we pass the result on to the onResponse() method of our Response.Listener.
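For completeness, a minimal sketch of a typical deliverResponse() override, again essentially what StringRequest does (mListener being the Response.Listener passed into the request's constructor):

    @Override
    protected void deliverResponse(String response) {
        // Hand the parsed result to the caller's listener; thanks to ExecutorDelivery
        // we are already on the main thread at this point.
        mListener.onResponse(response);
    }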


Well, at this point we have walked through Volley's entire execution flow. Does it feel clearer now? Remember the flow chart at the beginning of the article, the one that was hard to read at first? Let's look at it again.




In the chart, blue represents the main thread, green the cache thread, and orange the network threads. We call RequestQueue's add() method on the main thread to submit a request, and the request first goes to the cache queue. If a corresponding cache entry is found, the cached data is read and parsed directly, and the result is delivered back on the main thread. If no usable result is found in the cache, the request is added to the network request queue; an HTTP request is then sent, the response is parsed and written to the cache, and the result is delivered back on the main thread.


Is the diagram easy to understand now? With that, we have covered both Volley's usage and its source code, and I am sure you are now familiar enough with Volley to apply it in real projects. This concludes the complete Volley analysis series. Thank you for reading all the way to the end.
